Jan 20 03:49:12 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 20 03:49:12 crc restorecon[4693]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:12 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 
03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 03:49:13 crc 
restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 
03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 
03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 03:49:13 crc restorecon[4693]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 03:49:13 crc restorecon[4693]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 20 03:49:13 crc kubenswrapper[4898]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 03:49:13 crc kubenswrapper[4898]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 20 03:49:13 crc kubenswrapper[4898]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 03:49:13 crc kubenswrapper[4898]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 03:49:13 crc kubenswrapper[4898]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 20 03:49:13 crc kubenswrapper[4898]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.517156 4898 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520239 4898 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520254 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520258 4898 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520263 4898 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520266 4898 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520270 4898 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520274 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520278 4898 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520282 4898 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520290 4898 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520296 4898 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520301 4898 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520305 4898 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520310 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520313 4898 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520317 4898 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520320 4898 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520324 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520327 4898 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520331 4898 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520334 4898 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520338 4898 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520341 4898 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520345 4898 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520348 4898 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520352 4898 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520355 4898 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520359 4898 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520362 4898 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520366 4898 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520369 4898 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520373 4898 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520376 4898 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520380 4898 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520383 4898 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520387 4898 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520390 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520394 4898 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520398 4898 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520402 4898 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520405 4898 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520409 4898 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520412 4898 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520417 4898 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520421 4898 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520437 4898 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520441 4898 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520447 4898 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520450 4898 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520454 4898 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520458 4898 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520461 4898 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520465 4898 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520468 4898 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520472 4898 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520476 4898 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520497 4898 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520501 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520505 4898 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520509 4898 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520513 4898 feature_gate.go:330] unrecognized feature gate: Example
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520517 4898 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520521 4898 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520525 4898 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520528 4898 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520532 4898 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520536 4898 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520539 4898 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520544 4898 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520547 4898 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.520552 4898 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520898 4898 flags.go:64] FLAG: --address="0.0.0.0"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520910 4898 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520919 4898 flags.go:64] FLAG: --anonymous-auth="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520925 4898 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520932 4898 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520936 4898 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520942 4898 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520948 4898 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520952 4898 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520957 4898 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520963 4898 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520968 4898 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520972 4898 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520977 4898 flags.go:64] FLAG: --cgroup-root=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520981 4898 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520985 4898 flags.go:64] FLAG: --client-ca-file=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520989 4898 flags.go:64] FLAG: --cloud-config=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520993 4898 flags.go:64] FLAG: --cloud-provider=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.520997 4898 flags.go:64] FLAG: --cluster-dns="[]"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521002 4898 flags.go:64] FLAG: --cluster-domain=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521007 4898 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521011 4898 flags.go:64] FLAG: --config-dir=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521015 4898 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521019 4898 flags.go:64] FLAG: --container-log-max-files="5"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521024 4898 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521028 4898 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521032 4898 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521036 4898 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521041 4898 flags.go:64] FLAG: --contention-profiling="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521044 4898 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521048 4898 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521053 4898 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521057 4898 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521062 4898 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521067 4898 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521071 4898 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521076 4898 flags.go:64] FLAG: --enable-load-reader="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521080 4898 flags.go:64] FLAG: --enable-server="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521084 4898 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521091 4898 flags.go:64] FLAG: --event-burst="100"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521095 4898 flags.go:64] FLAG: --event-qps="50"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521099 4898 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521103 4898 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521108 4898 flags.go:64] FLAG: --eviction-hard=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521112 4898 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521117 4898 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521121 4898 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521126 4898 flags.go:64] FLAG: --eviction-soft=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521130 4898 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521134 4898 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521138 4898 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521143 4898 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521147 4898 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521151 4898 flags.go:64] FLAG: --fail-swap-on="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521155 4898 flags.go:64] FLAG: --feature-gates=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521161 4898 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521165 4898 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521171 4898 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521175 4898 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521179 4898 flags.go:64] FLAG: --healthz-port="10248"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521183 4898 flags.go:64] FLAG: --help="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521187 4898 flags.go:64] FLAG: --hostname-override=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521191 4898 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521195 4898 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521199 4898 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521204 4898 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521208 4898 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521212 4898 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521216 4898 flags.go:64] FLAG: --image-service-endpoint=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521221 4898 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521225 4898 flags.go:64] FLAG: --kube-api-burst="100"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521229 4898 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521233 4898 flags.go:64] FLAG: --kube-api-qps="50"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521237 4898 flags.go:64] FLAG: --kube-reserved=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521241 4898 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521245 4898 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521249 4898 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521253 4898 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521257 4898 flags.go:64] FLAG: --lock-file=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521261 4898 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521265 4898 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521269 4898 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521276 4898 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521280 4898 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521284 4898 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521288 4898 flags.go:64] FLAG: --logging-format="text"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521292 4898 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521296 4898 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521301 4898 flags.go:64] FLAG: --manifest-url=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521304 4898 flags.go:64] FLAG: --manifest-url-header=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521310 4898 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521314 4898 flags.go:64] FLAG: --max-open-files="1000000"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521319 4898 flags.go:64] FLAG: --max-pods="110"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521324 4898 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521329 4898 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521333 4898 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521337 4898 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521341 4898 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521345 4898 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521350 4898 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521359 4898 flags.go:64] FLAG: --node-status-max-images="50"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521363 4898 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521368 4898 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521372 4898 flags.go:64] FLAG: --pod-cidr=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521376 4898 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521382 4898 flags.go:64] FLAG: --pod-manifest-path=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521386 4898 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521390 4898 flags.go:64] FLAG: --pods-per-core="0"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521394 4898 flags.go:64] FLAG: --port="10250"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521398 4898 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521402 4898 flags.go:64] FLAG: --provider-id=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521406 4898 flags.go:64] FLAG: --qos-reserved=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521410 4898 flags.go:64] FLAG: --read-only-port="10255"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521414 4898 flags.go:64] FLAG: --register-node="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521418 4898 flags.go:64] FLAG: --register-schedulable="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521422 4898 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521442 4898 flags.go:64] FLAG: --registry-burst="10"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521447 4898 flags.go:64] FLAG: --registry-qps="5"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521452 4898 flags.go:64] FLAG: --reserved-cpus=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521456 4898 flags.go:64] FLAG: --reserved-memory=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521461 4898 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521465 4898 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521469 4898 flags.go:64] FLAG: --rotate-certificates="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521473 4898 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521477 4898 flags.go:64] FLAG: --runonce="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521481 4898 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521486 4898 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521490 4898 flags.go:64] FLAG: --seccomp-default="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521494 4898 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521498 4898 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521503 4898 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521519 4898 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521525 4898 flags.go:64] FLAG: --storage-driver-password="root"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521529 4898 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521534 4898 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521538 4898 flags.go:64] FLAG: --storage-driver-user="root"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521542 4898 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521546 4898 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521550 4898 flags.go:64] FLAG: --system-cgroups=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521554 4898 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521561 4898 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521565 4898 flags.go:64] FLAG: --tls-cert-file=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521569 4898 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521574 4898 flags.go:64] FLAG: --tls-min-version=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521578 4898 flags.go:64] FLAG: --tls-private-key-file=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521582 4898 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521586 4898 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521590 4898 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521594 4898 flags.go:64] FLAG: --v="2"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521602 4898 flags.go:64] FLAG: --version="false"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521608 4898 flags.go:64] FLAG: --vmodule=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521613 4898 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521617 4898 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521709 4898 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521714 4898 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521718 4898 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521723 4898 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521727 4898 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521731 4898 feature_gate.go:330] unrecognized feature gate: Example
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521736 4898 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521741 4898 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521745 4898 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521749 4898 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521753 4898 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521758 4898 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521763 4898 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521767 4898 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521771 4898 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521775 4898 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521778 4898 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521782 4898 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521785 4898 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521789 4898 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521793 4898 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521796 4898 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521800 4898 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521803 4898 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521807 4898 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521811 4898 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521814 4898 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521818 4898 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521827 4898 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521831 4898 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521834 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521837 4898 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521841 4898 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521844 4898 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521848 4898 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521851 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521855 4898 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521859 4898 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521863 4898 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521868 4898 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521873 4898 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521877 4898 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521880 4898 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521884 4898 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521887 4898 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521891 4898 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521894 4898 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521898 4898 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521901 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521905 4898 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521909 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521913 4898 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521917 4898 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521920 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521924 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521927 4898 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521930 4898 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521934 4898 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521937 4898 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521941 4898 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521946 4898 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521949 4898 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521953 4898 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521956 4898 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521960 4898 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521963 4898 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521966 4898 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521970 4898 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521974 4898 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521978 4898 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.521982 4898 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.521988 4898 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.536191 4898 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.536242 4898 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536383 4898 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536397 4898 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536407 4898 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536416 4898 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536424 4898 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536454 4898 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536462 4898 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536471 4898 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536479 4898 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536486 4898 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536494 4898 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536505 4898 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536518 4898 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536527 4898 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536536 4898 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536545 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536553 4898 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536561 4898 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536570 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536578 4898 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536587 4898 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536596 4898 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536604 4898 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536614 4898 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536622 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536630 4898 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536638 4898 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536647 4898 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536655 4898 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536664 4898 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536672 4898 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536684 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536694 4898 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536703 4898 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536712 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536722 4898 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536730 4898 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536738 4898 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536746 4898 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536754 4898 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536762 4898 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536769 4898 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536777 4898 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536786 4898 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536794 4898 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536801 4898 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536809 4898 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536817 4898 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536824 4898 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536832 4898 feature_gate.go:330] unrecognized feature gate: Example
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536840 4898 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536850 4898 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536859 4898 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536868 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536876 4898 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536886 4898 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536896 4898 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536905 4898 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536913 4898 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536921 4898 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536929 4898 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536937 4898 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536945 4898 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536953 4898 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536964 4898 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536974 4898 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536983 4898 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536991 4898 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.536999 4898 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537008 4898 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537017 4898 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.537031 4898 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537292 4898 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537306 4898 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537314 4898 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537323 4898 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537331 4898 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537339 4898 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537346 4898 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537354 4898 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537363 4898 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537370 4898 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537378 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537387 4898 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537395 4898 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537403 4898 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537411 4898 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537420 4898 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537427 4898 feature_gate.go:330] unrecognized feature gate: Example
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537457 4898 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537466 4898 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537475 4898 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537483 4898 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537492 4898 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537500 4898 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537508 4898 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537517 4898 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537525 4898 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537532 4898 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537540 4898 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537548 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537556 4898 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537564 4898 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537572 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537581 4898 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537592 4898 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537602 4898 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537612 4898 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537621 4898 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537629 4898 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537639 4898 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537648 4898 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537656 4898 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537664 4898 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537671 4898 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537680 4898 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537688 4898 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537696 4898 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537704 4898 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537711 4898 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537719 4898 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537727 4898 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537738 4898 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537747 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537755 4898 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537763 4898 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537771 4898 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537779 4898 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537787 4898 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537795 4898 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537803 4898 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537812 4898 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537823 4898 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537833 4898 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537842 4898 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537853 4898 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537863 4898 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537873 4898 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537883 4898 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537891 4898 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537900 4898 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537909 4898 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.537918 4898 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.537930 4898 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.538212 4898 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.543145 4898 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.543296 4898 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.544136 4898 server.go:997] "Starting client certificate rotation"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.544172 4898 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.544561 4898 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-08 15:51:36.674864594 +0000 UTC
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.544709 4898 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.551994 4898 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.554547 4898 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.555839 4898 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.567670 4898 log.go:25] "Validated CRI v1 runtime API"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.590932 4898 log.go:25] "Validated CRI v1 image API"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.593559 4898 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.598097 4898 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-20-03-44-52-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.598172 4898 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.629164 4898 manager.go:217] Machine: {Timestamp:2026-01-20 03:49:13.626842663 +0000 UTC m=+0.226630602 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:143a7ca7-6529-4cf4-be5d-89f92f602735 BootID:d10a9157-1f00-4a30-b3ba-08cfa97c4549 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:51:db:9c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:51:db:9c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a6:6d:85 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d2:df:2c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7f:62:51 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ec:65:bc Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b2:b6:7c:27:bd:0f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:da:a0:e4:d4:dd:5c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.629610 4898 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.629858 4898 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.630694 4898 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.631023 4898 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.631090 4898 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.631561 4898 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.631582 4898 container_manager_linux.go:303] "Creating device plugin manager"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.631858 4898 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.631924 4898 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.632298 4898 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.632465 4898 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.633358 4898 kubelet.go:418] "Attempting to sync node with API server"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.633391 4898 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.633454 4898 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.633480 4898 kubelet.go:324] "Adding apiserver pod source"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.633500 4898 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.635884 4898 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.635883 4898 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.636026 4898 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.636039 4898 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.636284 4898 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.636895 4898 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
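[Annotation] The nodeConfig blob above encodes, among other things, the hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, and so on), each expressed either as an absolute quantity or as a percentage of capacity. A simplified Go sketch of how such a LessThan threshold is evaluated; the types are illustrative, not the kubelet's actual eviction API:

    package main

    import "fmt"

    // threshold models one entry of the HardEvictionThresholds
    // list logged above: either an absolute quantity (bytes) or
    // a percentage of capacity. Field shapes are simplified.
    type threshold struct {
        signal   string
        quantity int64   // bytes; 0 if percentage-based
        percent  float64 // fraction of capacity; 0 if quantity-based
    }

    // breached reports whether available has fallen below the
    // threshold, mirroring the "LessThan" operator in the log.
    func (t threshold) breached(available, capacity int64) bool {
        limit := t.quantity
        if t.percent > 0 {
            limit = int64(float64(capacity) * t.percent)
        }
        return available < limit
    }

    func main() {
        memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
        nodefs := threshold{signal: "nodefs.available", percent: 0.10}

        fmt.Println(memory.breached(64<<20, 32<<30))      // true: under 100Mi free
        fmt.Println(nodefs.breached(20<<30, 85292941312)) // false: over 10% of /var free
    }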
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.638083 4898 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.638868 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.638907 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.638922 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.638936 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.638958 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.638972 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.638986 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.639008 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.639025 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.639039 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.639059 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.639073 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.639648 4898 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.640496 4898 server.go:1280] "Started kubelet"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.640834 4898 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.640939 4898 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.641030 4898 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.642562 4898 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 03:49:13 crc systemd[1]: Started Kubernetes Kubelet.
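[Annotation] With the kubelet started, the volume manager rebuilds its actual state from what is already on disk under /var/lib/kubelet; the long run of reconstruct.go entries below records each volume found this way being marked uncertain until it can be re-verified. A toy Go sketch of that bookkeeping, with deliberately simplified types (the real reconstructor also tracks mount paths, SELinux contexts, and device mounts):

    package main

    import "fmt"

    // mountState is a simplified version of the certainty tracking
    // behind the reconstruct.go entries below: volumes found on disk
    // after a kubelet restart start out "uncertain".
    type mountState int

    const (
        uncertain mountState = iota
        mounted
    )

    type volumeKey struct {
        podUID     string
        volumeName string
    }

    type actualState struct {
        volumes map[volumeKey]mountState
    }

    // markUncertain records a reconstructed volume, as each
    // "Volume is marked as uncertain" line does.
    func (a *actualState) markUncertain(podUID, volumeName string) {
        if a.volumes == nil {
            a.volumes = map[volumeKey]mountState{}
        }
        a.volumes[volumeKey{podUID, volumeName}] = uncertain
    }

    func main() {
        var s actualState
        // UID and volume name taken from the log below.
        s.markUncertain("6509e943-70c6-444c-bc41-48a544e36fbd",
            "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle")
        fmt.Println(len(s.volumes), "volume(s) reconstructed as uncertain")
    }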
Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.644846 4898 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c53dafbb0ee69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 03:49:13.640414825 +0000 UTC m=+0.240202714,LastTimestamp:2026-01-20 03:49:13.640414825 +0000 UTC m=+0.240202714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.646962 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.647054 4898 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.647723 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:39:26.759933901 +0000 UTC
Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.648583 4898 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.649319 4898 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.649347 4898 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.649401 4898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.649773 4898 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.653130 4898 server.go:460] "Adding debug handlers to kubelet server"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.654464 4898 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.654511 4898 factory.go:55] Registering systemd factory
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.654532 4898 factory.go:221] Registration of the systemd container factory successfully
Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.654809 4898 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.655035 4898 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.655855 4898 factory.go:153] Registering CRI-O factory
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.655895 4898 factory.go:221] Registration of the crio container factory successfully
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.655943 4898 factory.go:103] Registering Raw factory
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.655976 4898 manager.go:1196] Started watching for new ooms in manager
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.660657 4898 manager.go:319] Starting recovery of all containers
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674193 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674344 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674371 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674393 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674420 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674461 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674481 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674508 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674532 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674552 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674571 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674592 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674613 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674638 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674657 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674677 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674701 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674720 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674739 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674760 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674786 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674815 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674835 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674855 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674950 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674970 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.674993 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675046 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675067 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675087 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675106 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675126 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675149 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675168 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675225 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675245 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675264 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675286 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675335 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675356 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675404 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675422 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675512 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675532 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675582 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675603 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675626 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675646 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675671 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675691 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675712 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675731 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675758 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.675780 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676679 4898 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676739 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676763 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676784 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676806 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676826 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676845 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676864 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676888 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676908 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676931 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676950 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676970 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.676989 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677009 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677054 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677073 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677092 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677111 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677130 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677149 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677172 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677191 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677218 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677238 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677259 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677283 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677302 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677332 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677352 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677372 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677390 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677412 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677454 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677474 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677494 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677512 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677531 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677556 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677577 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677595 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677622 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677642 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677704 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677905 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677932 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677954 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.677975 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678003 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678023 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678050 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678263 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678318 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678345 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678370 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678405 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678472 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678497 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678517 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678540 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678591 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678617 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678639 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678658 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678679 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678764 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678783 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678802 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678879 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678941 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678961 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.678981 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679022 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679041 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679091 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679116 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679211 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679294 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679323 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679346 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679372 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679397 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679423 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679481 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679552 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679573 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679593 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13"
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679611 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679629 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679649 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679668 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679820 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679867 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679886 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679905 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679925 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679946 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679965 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.679984 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680005 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680079 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680100 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680119 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680138 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680182 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680203 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680245 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680302 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680364 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680383 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680402 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680561 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680584 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680603 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680621 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680641 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680691 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680712 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680731 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680749 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680769 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680788 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680806 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680826 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680875 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680893 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680912 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680931 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680948 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680969 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.680989 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681008 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681052 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681080 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681099 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681117 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681170 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681202 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681260 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681282 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681303 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681321 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681383 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681403 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681425 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681513 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681576 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681604 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681632 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681715 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681773 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681793 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681814 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681840 4898 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681864 4898 reconstruct.go:97] "Volume reconstruction finished" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.681878 4898 reconciler.go:26] "Reconciler: start to sync state" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.694399 4898 manager.go:324] Recovery completed Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.708946 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.713941 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.714193 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.714327 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.717653 4898 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.719512 4898 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.719546 4898 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.719579 4898 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.719975 4898 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.720010 4898 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.720032 4898 kubelet.go:2335] "Starting kubelet main sync loop" Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.720079 4898 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 03:49:13 crc kubenswrapper[4898]: W0120 03:49:13.720722 4898 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.720789 4898 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.749472 4898 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.749930 4898 policy_none.go:49] "None policy: Start" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.751097 4898 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.751135 4898 state_mem.go:35] "Initializing new in-memory state store" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.802196 4898 manager.go:334] "Starting Device Plugin manager" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.802313 4898 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.802343 4898 server.go:79] "Starting device plugin registration server" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.803285 4898 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.803329 4898 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.803690 4898 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.803876 4898 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.803909 4898 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.815385 4898 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.820627 4898 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 20 03:49:13 crc kubenswrapper[4898]: 
I0120 03:49:13.820715 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.821656 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.821686 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.821696 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.821838 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.821967 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.822007 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.822741 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.822777 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.822788 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.822924 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.822966 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.822984 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.822992 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.823243 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.823345 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.824039 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.824090 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.824109 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.824274 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.824497 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.824612 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.824664 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.824693 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.824708 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.825919 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.825943 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.825958 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.825991 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.826035 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.826056 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.826126 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.826296 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.826337 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.827002 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.827027 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.827039 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.827180 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.827208 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.827734 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.827799 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.827813 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.827920 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.827952 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.827968 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.850547 4898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.884862 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.884925 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.884958 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885042 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885066 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 
03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885185 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885252 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885303 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885353 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885397 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885475 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885522 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885569 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885595 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.885647 4898 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.903930 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.905524 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.905585 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.905602 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.905641 4898 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 03:49:13 crc kubenswrapper[4898]: E0120 03:49:13.906314 4898 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987105 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987187 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987218 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987249 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987279 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987307 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") 
" pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987332 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987358 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987415 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987462 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987492 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987496 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987590 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987550 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987604 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987708 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987648 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987585 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987519 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.988234 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987561 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.987542 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.988587 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.988683 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.991322 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.991466 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.991549 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.991596 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.991680 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:13 crc kubenswrapper[4898]: I0120 03:49:13.991695 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.106902 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.108649 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.108713 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.108733 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.108774 4898 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 03:49:14 crc kubenswrapper[4898]: E0120 03:49:14.109422 4898 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.160321 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.178557 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 03:49:14 crc kubenswrapper[4898]: W0120 03:49:14.191255 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2a26e10de7d0e1676a173d7e1b69a2edf6ede4c38d662c28a89d42b6c6e9349c WatchSource:0}: Error finding container 2a26e10de7d0e1676a173d7e1b69a2edf6ede4c38d662c28a89d42b6c6e9349c: Status 404 returned error can't find the container with id 2a26e10de7d0e1676a173d7e1b69a2edf6ede4c38d662c28a89d42b6c6e9349c Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.203000 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:14 crc kubenswrapper[4898]: W0120 03:49:14.205467 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ce24edd41266ee8f311290fe5b86d560650753e8fcff3a5e6767d84f225dae07 WatchSource:0}: Error finding container ce24edd41266ee8f311290fe5b86d560650753e8fcff3a5e6767d84f225dae07: Status 404 returned error can't find the container with id ce24edd41266ee8f311290fe5b86d560650753e8fcff3a5e6767d84f225dae07 Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.213246 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:49:14 crc kubenswrapper[4898]: W0120 03:49:14.219033 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d87eb29383b16fc1d7b200b9f5412b68a985f4faf9a2d35944cd4b557837dd11 WatchSource:0}: Error finding container d87eb29383b16fc1d7b200b9f5412b68a985f4faf9a2d35944cd4b557837dd11: Status 404 returned error can't find the container with id d87eb29383b16fc1d7b200b9f5412b68a985f4faf9a2d35944cd4b557837dd11 Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.220902 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 03:49:14 crc kubenswrapper[4898]: W0120 03:49:14.239377 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3b1ee6f01e3ebf97ecd97f9623a677f5176f256ee6c918a6bb432d5f05bf1d52 WatchSource:0}: Error finding container 3b1ee6f01e3ebf97ecd97f9623a677f5176f256ee6c918a6bb432d5f05bf1d52: Status 404 returned error can't find the container with id 3b1ee6f01e3ebf97ecd97f9623a677f5176f256ee6c918a6bb432d5f05bf1d52 Jan 20 03:49:14 crc kubenswrapper[4898]: W0120 03:49:14.249118 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-3bf9b27b029373c183e3453e5e59a8dc500ab991f4d61477f9c48077a2121208 WatchSource:0}: Error finding container 3bf9b27b029373c183e3453e5e59a8dc500ab991f4d61477f9c48077a2121208: Status 404 returned error can't find the container with id 3bf9b27b029373c183e3453e5e59a8dc500ab991f4d61477f9c48077a2121208 Jan 20 03:49:14 crc kubenswrapper[4898]: E0120 03:49:14.251831 4898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="800ms" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.510246 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.513282 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.513346 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.513365 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.513404 4898 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 03:49:14 crc kubenswrapper[4898]: E0120 03:49:14.514117 4898 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.642411 4898 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.648450 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:44:18.055828684 +0000 UTC Jan 20 03:49:14 crc kubenswrapper[4898]: W0120 03:49:14.706097 4898 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 20 03:49:14 crc kubenswrapper[4898]: E0120 03:49:14.706201 4898 reflector.go:158] "Unhandled Error" 
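Node registration and the node lease both fail while api-int.crc.testing:6443 refuses connections, and the lease controller's retry interval doubles between attempts (800ms above, then 1.6s and 3.2s later in the log). A rough sketch of that doubling retry pattern, assuming a cap value for illustration (this is not kubelet's actual code):

```go
// Sketch of a doubling retry interval against the API endpoint from the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	interval := 800 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap, for the sketch only
	for {
		conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server reachable; lease can be ensured")
			return
		}
		fmt.Printf("Failed to ensure lease exists, will retry: %v interval=%s\n", err, interval)
		time.Sleep(interval)
		if interval *= 2; interval > maxInterval {
			interval = maxInterval // stop doubling once the cap is reached
		}
	}
}
```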
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.729040 4898 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47" exitCode=0 Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.729179 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47"} Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.729371 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d87eb29383b16fc1d7b200b9f5412b68a985f4faf9a2d35944cd4b557837dd11"} Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.729572 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.731906 4898 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845" exitCode=0 Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.731990 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.732031 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.732045 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.731992 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845"} Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.732082 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ce24edd41266ee8f311290fe5b86d560650753e8fcff3a5e6767d84f225dae07"} Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.732328 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.733570 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.733594 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.733624 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.733633 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 
03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.734707 4898 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d6a0b28269257c5d2e0454d99e3303a19bdebebdc779a948bd83ae2496f1d349" exitCode=0 Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.734768 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d6a0b28269257c5d2e0454d99e3303a19bdebebdc779a948bd83ae2496f1d349"} Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.734827 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2a26e10de7d0e1676a173d7e1b69a2edf6ede4c38d662c28a89d42b6c6e9349c"} Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.734986 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.738644 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.738687 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.738707 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.738730 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.738733 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.738882 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.740487 4898 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0" exitCode=0 Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.740572 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0"} Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.740619 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3bf9b27b029373c183e3453e5e59a8dc500ab991f4d61477f9c48077a2121208"} Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.740725 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.741667 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.741698 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:14 crc kubenswrapper[4898]: 
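The "Generic (PLEG)" lines above come from the pod lifecycle event generator: it relists the container runtime and turns container state changes into the ContainerStarted/ContainerDied events consumed by the sync loop. A toy sketch of that event shape, using illustrative types rather than kubelet's real ones:

```go
// Sketch only: a PLEG-style event stream like the one driving the
// "SyncLoop (PLEG): event for pod" lines above.
package main

import "fmt"

type PodLifecycleEventType string

const (
	ContainerStarted PodLifecycleEventType = "ContainerStarted"
	ContainerDied    PodLifecycleEventType = "ContainerDied"
)

type PodLifecycleEvent struct {
	PodID string
	Type  PodLifecycleEventType
	Data  string // container or sandbox ID from the runtime relist
}

func main() {
	// An init container finishing (exitCode=0) then the sandbox appearing,
	// mirroring the Died/Started pairs in the log; IDs truncated.
	events := []PodLifecycleEvent{
		{"f4b27818a5e8e43d0dc095d08835c792", ContainerDied, "3495bcee..."},
		{"f4b27818a5e8e43d0dc095d08835c792", ContainerStarted, "d87eb293..."},
	}
	for _, e := range events {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.PodID, e.Type, e.Data)
	}
}
```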
Jan 20 03:49:14 crc kubenswrapper[4898]: W0120 03:49:14.742950 4898 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 20 03:49:14 crc kubenswrapper[4898]: E0120 03:49:14.743014 4898 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.743059 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc"}
Jan 20 03:49:14 crc kubenswrapper[4898]: I0120 03:49:14.743104 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3b1ee6f01e3ebf97ecd97f9623a677f5176f256ee6c918a6bb432d5f05bf1d52"}
Jan 20 03:49:14 crc kubenswrapper[4898]: W0120 03:49:14.851680 4898 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 20 03:49:14 crc kubenswrapper[4898]: E0120 03:49:14.851794 4898 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 20 03:49:15 crc kubenswrapper[4898]: E0120 03:49:15.053410 4898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s"
Jan 20 03:49:15 crc kubenswrapper[4898]: W0120 03:49:15.140407 4898 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 20 03:49:15 crc kubenswrapper[4898]: E0120 03:49:15.140515 4898 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.314862 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.316174 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
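The reflector errors above come from client-go shared informers the kubelet runs for Node (scoped to itself via a field selector), Service, RuntimeClass and CSIDriver; each keeps retrying its LIST until the API server answers. A sketch of such an informer, assuming a kubeconfig path for illustration:

```go
// Sketch of a client-go shared informer like the one producing the
// "failed to list *v1.Node ... fieldSelector=metadata.name%3Dcrc" lines.
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	factory := informers.NewSharedInformerFactoryWithOptions(cs, 0,
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.FieldSelector = "metadata.name=crc" // list only this node, as in the log
		}))
	stop := make(chan struct{})
	factory.Core().V1().Nodes().Informer() // register the Node informer
	factory.Start(stop)
	// Blocks until the LIST succeeds, i.e. "Caches populated for *v1.Node".
	factory.WaitForCacheSync(stop)
}
```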
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.316211 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.316222 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.316247 4898 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 03:49:15 crc kubenswrapper[4898]: E0120 03:49:15.316678 4898 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.642146 4898 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.648808 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:57:19.410031501 +0000 UTC
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.658227 4898 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 20 03:49:15 crc kubenswrapper[4898]: E0120 03:49:15.660075 4898 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.756037 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443"}
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.756098 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d"}
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.759464 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be"}
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.759514 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20"}
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.762182 4898 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4" exitCode=0
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.762241 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4"}
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.762378 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.763856 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.763895 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.763918 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.774865 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4357a76e77495fe84cc44f53161be1e9a51d2221f1b0b277caf9ace2c2d7d418"}
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.774988 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.776517 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.776547 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.776562 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.787770 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010"}
Jan 20 03:49:15 crc kubenswrapper[4898]: I0120 03:49:15.787831 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d"}
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.650048 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:19:44.588943651 +0000 UTC
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.795415 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad"}
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.795531 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222"}
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.795561 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b"}
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.795605 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.796936 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.796981 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.796994 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.800290 4898 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea" exitCode=0
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.800381 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea"}
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.800905 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.802455 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.802507 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.802526 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.804876 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069"}
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.804932 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.806465 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.806503 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.806517 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.809925 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31"}
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.810042 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.811180 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.811226 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.811244 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.916891 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.919113 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.919189 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.919211 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:16 crc kubenswrapper[4898]: I0120 03:49:16.919249 4898 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.618669 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.651083 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 12:10:02.643016792 +0000 UTC
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.755071 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.818864 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb"}
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.818923 4898 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.818950 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4"}
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.818970 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920"}
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.818991 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.818933 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.819065 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.821201 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.821212 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.821309 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.821326 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.821324 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.821501 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.821944 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.821992 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.822012 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:17 crc kubenswrapper[4898]: I0120 03:49:17.848765 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.652108 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:58:42.825190317 +0000 UTC
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.831561 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32"}
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.831632 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd"}
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.831672 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.831829 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.832019 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.833484 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.833505 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.833532 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.833550 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.833571 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.833610 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.834577 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.834645 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:18 crc kubenswrapper[4898]: I0120 03:49:18.834663 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:19 crc kubenswrapper[4898]: I0120 03:49:19.652754 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:26:18.957936847 +0000 UTC
Jan 20 03:49:19 crc kubenswrapper[4898]: I0120 03:49:19.835257 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:19 crc kubenswrapper[4898]: I0120 03:49:19.835413 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:19 crc kubenswrapper[4898]: I0120 03:49:19.836926 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:19 crc kubenswrapper[4898]: I0120 03:49:19.836977 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:19 crc kubenswrapper[4898]: I0120 03:49:19.836997 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:19 crc kubenswrapper[4898]: I0120 03:49:19.837558 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:19 crc kubenswrapper[4898]: I0120 03:49:19.837614 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:19 crc kubenswrapper[4898]: I0120 03:49:19.837640 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:19 crc kubenswrapper[4898]: I0120 03:49:19.898729 4898 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 20 03:49:20 crc kubenswrapper[4898]: I0120 03:49:20.653852 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:57:54.703542369 +0000 UTC
Jan 20 03:49:20 crc kubenswrapper[4898]: I0120 03:49:20.983408 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 20 03:49:20 crc kubenswrapper[4898]: I0120 03:49:20.983949 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:20 crc kubenswrapper[4898]: I0120 03:49:20.986176 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
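The kubelet-serving "rotation deadline" above changes on every line while the expiration stays fixed because the certificate manager recomputes the deadline with jitter, picking a random point in the certificate's validity window (client-go aims at roughly the 70-90% range; the exact factors below are an assumption). A minimal sketch:

```go
// Sketch of a jittered rotation deadline like the one logged by
// certificate_manager.go:356; factors 0.7..0.9 are assumed.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year validity
	// Each recomputation lands somewhere else in the window, which is why
	// the logged deadline jumps between November and January.
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
	}
}
```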
Jan 20 03:49:20 crc kubenswrapper[4898]: I0120 03:49:20.986272 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:20 crc kubenswrapper[4898]: I0120 03:49:20.986292 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:21 crc kubenswrapper[4898]: I0120 03:49:21.004175 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 03:49:21 crc kubenswrapper[4898]: I0120 03:49:21.004580 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:21 crc kubenswrapper[4898]: I0120 03:49:21.006585 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:21 crc kubenswrapper[4898]: I0120 03:49:21.006710 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:21 crc kubenswrapper[4898]: I0120 03:49:21.006773 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:21 crc kubenswrapper[4898]: I0120 03:49:21.121624 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 20 03:49:21 crc kubenswrapper[4898]: I0120 03:49:21.654897 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:53:30.291464645 +0000 UTC
Jan 20 03:49:21 crc kubenswrapper[4898]: I0120 03:49:21.843131 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:21 crc kubenswrapper[4898]: I0120 03:49:21.844750 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:21 crc kubenswrapper[4898]: I0120 03:49:21.844822 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:21 crc kubenswrapper[4898]: I0120 03:49:21.844850 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:22 crc kubenswrapper[4898]: I0120 03:49:22.304928 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 03:49:22 crc kubenswrapper[4898]: I0120 03:49:22.305165 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:22 crc kubenswrapper[4898]: I0120 03:49:22.306676 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:22 crc kubenswrapper[4898]: I0120 03:49:22.306736 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:22 crc kubenswrapper[4898]: I0120 03:49:22.306760 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:22 crc kubenswrapper[4898]: I0120 03:49:22.616963 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 03:49:22 crc kubenswrapper[4898]: I0120 03:49:22.655938 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:16:10.797467415 +0000 UTC
Jan 20 03:49:22 crc kubenswrapper[4898]: I0120 03:49:22.846694 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:22 crc kubenswrapper[4898]: I0120 03:49:22.848345 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:22 crc kubenswrapper[4898]: I0120 03:49:22.848406 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:22 crc kubenswrapper[4898]: I0120 03:49:22.848424 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:23 crc kubenswrapper[4898]: I0120 03:49:23.512005 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 03:49:23 crc kubenswrapper[4898]: I0120 03:49:23.519493 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 03:49:23 crc kubenswrapper[4898]: I0120 03:49:23.656132 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:57:29.662007258 +0000 UTC
Jan 20 03:49:23 crc kubenswrapper[4898]: E0120 03:49:23.815656 4898 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 20 03:49:23 crc kubenswrapper[4898]: I0120 03:49:23.849722 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:23 crc kubenswrapper[4898]: I0120 03:49:23.851035 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:23 crc kubenswrapper[4898]: I0120 03:49:23.851089 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:23 crc kubenswrapper[4898]: I0120 03:49:23.851108 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:24 crc kubenswrapper[4898]: I0120 03:49:24.656738 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 06:08:23.615671188 +0000 UTC
Jan 20 03:49:24 crc kubenswrapper[4898]: I0120 03:49:24.852777 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:24 crc kubenswrapper[4898]: I0120 03:49:24.854512 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:24 crc kubenswrapper[4898]: I0120 03:49:24.854569 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:24 crc kubenswrapper[4898]: I0120 03:49:24.854586 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:24 crc kubenswrapper[4898]: I0120 03:49:24.862992 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 03:49:25 crc kubenswrapper[4898]: I0120 03:49:25.391135 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
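The "SyncLoop (probe)" transitions above (startup unhealthy, then started, then readiness ready) are driven by startup and readiness probes declared on the static pods. An illustrative probe pair, with path, port and thresholds assumed rather than read from the actual manifests:

```go
// Sketch only: a startup/readiness probe pair of the kind behind the
// probe status transitions in the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	httpsHealthz := corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path:   "/healthz",              // assumed path
			Port:   intstr.FromInt(10357),   // assumed port (cluster-policy-controller uses it above)
			Scheme: corev1.URISchemeHTTPS,
		},
	}
	startup := &corev1.Probe{
		ProbeHandler:     httpsHealthz,
		FailureThreshold: 30, // the pod reads "unhealthy" until the first success
		PeriodSeconds:    5,
	}
	readiness := &corev1.Probe{
		ProbeHandler:  httpsHealthz,
		PeriodSeconds: 10, // readiness flips to "ready" only after startup passes
	}
	fmt.Println(startup.FailureThreshold, readiness.PeriodSeconds)
}
```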
Jan 20 03:49:25 crc kubenswrapper[4898]: I0120 03:49:25.657181 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:16:20.653572218 +0000 UTC
Jan 20 03:49:25 crc kubenswrapper[4898]: I0120 03:49:25.855932 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:25 crc kubenswrapper[4898]: I0120 03:49:25.857408 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:25 crc kubenswrapper[4898]: I0120 03:49:25.857496 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:25 crc kubenswrapper[4898]: I0120 03:49:25.857518 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:26 crc kubenswrapper[4898]: I0120 03:49:26.642711 4898 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 20 03:49:26 crc kubenswrapper[4898]: E0120 03:49:26.654809 4898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Jan 20 03:49:26 crc kubenswrapper[4898]: I0120 03:49:26.657793 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:46:49.435892917 +0000 UTC
Jan 20 03:49:26 crc kubenswrapper[4898]: I0120 03:49:26.685497 4898 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 20 03:49:26 crc kubenswrapper[4898]: I0120 03:49:26.685592 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 20 03:49:26 crc kubenswrapper[4898]: I0120 03:49:26.691580 4898 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 20 03:49:26 crc kubenswrapper[4898]: I0120 03:49:26.691668 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 20 03:49:26 crc kubenswrapper[4898]: I0120 03:49:26.859574 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 03:49:26 crc kubenswrapper[4898]: I0120 03:49:26.860689 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:26 crc kubenswrapper[4898]: I0120 03:49:26.860760 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:26 crc kubenswrapper[4898]: I0120 03:49:26.860780 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:27 crc kubenswrapper[4898]: I0120 03:49:27.628468 4898 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]log ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]etcd ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/priority-and-fairness-filter ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/start-apiextensions-informers ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/start-apiextensions-controllers ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/crd-informer-synced ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/start-system-namespaces-controller ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 20 03:49:27 crc kubenswrapper[4898]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/bootstrap-controller ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/apiservice-registration-controller ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/apiservice-discovery-controller ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]autoregister-completion ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/apiservice-openapi-controller ok
Jan 20 03:49:27 crc kubenswrapper[4898]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 20 03:49:27 crc kubenswrapper[4898]: livez check failed
Jan 20 03:49:27 crc kubenswrapper[4898]: I0120 03:49:27.628566 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 03:49:27 crc kubenswrapper[4898]: I0120 03:49:27.658680 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:59:17.876924602 +0000 UTC
Jan 20 03:49:27 crc kubenswrapper[4898]: I0120 03:49:27.850035 4898 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 20 03:49:27 crc kubenswrapper[4898]: I0120 03:49:27.850109 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 20 03:49:28 crc kubenswrapper[4898]: I0120 03:49:28.391613 4898 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" start-of-body=
Jan 20 03:49:28 crc kubenswrapper[4898]: I0120 03:49:28.391759 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded"
Jan 20 03:49:28 crc kubenswrapper[4898]: I0120 03:49:28.576053 4898 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 20 03:49:28 crc kubenswrapper[4898]: I0120 03:49:28.576165 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
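The startup probe output above is the kube-apiserver's aggregated /livez endpoint: anonymous requests get the 403 shown at 03:49:26, and an authorized request returns 500 with one line per check until the failing poststarthooks finish. A sketch of querying it directly, assuming a token in the environment and skipping TLS verification for brevity:

```go
// Sketch only: fetching the aggregated /livez health output seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// InsecureSkipVerify is for the sketch only; use the cluster CA in practice.
	tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
	req, err := http.NewRequest("GET", "https://api-int.crc.testing:6443/livez?verbose", nil)
	if err != nil {
		panic(err)
	}
	// Token source is an assumption; without it the request is anonymous (403).
	req.Header.Set("Authorization", "Bearer "+os.Getenv("TOKEN"))
	resp, err := (&http.Client{Transport: tr}).Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 403 anonymous, 500 while poststarthooks fail, 200 once live.
	fmt.Println(resp.StatusCode)
	fmt.Println(string(body)) // the per-check [+]/[-] lines
}
```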
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 20 03:49:28 crc kubenswrapper[4898]: I0120 03:49:28.659488 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:55:01.388071646 +0000 UTC Jan 20 03:49:29 crc kubenswrapper[4898]: I0120 03:49:29.659800 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:06:14.136953706 +0000 UTC Jan 20 03:49:30 crc kubenswrapper[4898]: I0120 03:49:30.660236 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:02:08.653698695 +0000 UTC Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.014672 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.015214 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.017610 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.017769 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.017870 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.038814 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.660419 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:28:02.784865113 +0000 UTC Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.669630 4898 trace.go:236] Trace[1863032647]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 03:49:16.942) (total time: 14727ms): Jan 20 03:49:31 crc kubenswrapper[4898]: Trace[1863032647]: ---"Objects listed" error: 14727ms (03:49:31.669) Jan 20 03:49:31 crc kubenswrapper[4898]: Trace[1863032647]: [14.7273696s] [14.7273696s] END Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.669689 4898 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.671345 4898 trace.go:236] Trace[934001895]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 03:49:17.611) (total time: 14059ms): Jan 20 03:49:31 crc kubenswrapper[4898]: Trace[934001895]: ---"Objects listed" error: 14059ms (03:49:31.671) Jan 20 03:49:31 crc kubenswrapper[4898]: Trace[934001895]: [14.059712104s] [14.059712104s] END Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.671744 4898 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 20 03:49:31 crc kubenswrapper[4898]: E0120 03:49:31.682671 4898 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 20 03:49:31 
Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.682906 4898 trace.go:236] Trace[1268164689]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 03:49:17.228) (total time: 14454ms):
Jan 20 03:49:31 crc kubenswrapper[4898]: Trace[1268164689]: ---"Objects listed" error: 14454ms (03:49:31.682)
Jan 20 03:49:31 crc kubenswrapper[4898]: Trace[1268164689]: [14.454635046s] [14.454635046s] END
Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.683361 4898 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.684065 4898 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.684282 4898 trace.go:236] Trace[734892242]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 03:49:17.828) (total time: 13855ms):
Jan 20 03:49:31 crc kubenswrapper[4898]: Trace[734892242]: ---"Objects listed" error: 13855ms (03:49:31.683)
Jan 20 03:49:31 crc kubenswrapper[4898]: Trace[734892242]: [13.855475861s] [13.855475861s] END
Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.684323 4898 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 20 03:49:31 crc kubenswrapper[4898]: I0120 03:49:31.691716 4898 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.629865 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.631276 4898 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.631368 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.637873 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.644153 4898 apiserver.go:52] "Watching apiserver"
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.651176 4898 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.651653 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
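The multi-second "Reflector ListAndWatch" traces and the "Caches populated" lines above come from client-go informers: on startup each reflector performs one LIST against the apiserver (here taking 13-14 seconds because the apiserver itself was still becoming ready), primes a local cache with the result, and only then switches to a WATCH. A sketch of the same list-then-watch startup, assuming client-go is on the module path and the kubeconfig location below is a placeholder:

```go
// informer.go - a sketch of the informer startup that produces the
// "Reflector ListAndWatch" traces and "Caches populated" lines above.
// Not the kubelet's code; the kubeconfig path is an assumption.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // each informer's reflector issues a LIST, then a WATCH

	// Blocks until the initial LIST result is stored in the local cache;
	// on a slow apiserver this wait is exactly the 13-14s gap traced above.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches populated for *v1.Node")
}
```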
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.652123 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.652219 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.652312 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.652381 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.652525 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.652796 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.652927 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.653134 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.653316 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
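The "Error syncing pod" entries above all reduce to the same root cause: the container runtime reports NetworkReady=false because no CNI network configuration has been written yet, so no pod sandbox can be created for any non-host-network pod. A trivial check of the directory named in the message; the path comes from the log, but the check itself is a hypothetical helper, not a shipped tool:

```go
// cnicheck.go - "no CNI configuration file in /etc/kubernetes/cni/net.d/"
// means the runtime has nothing to wire pod sandboxes with, so pods stay
// in ContainerCreating. This just inspects that directory.
package main

import (
	"fmt"
	"os"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d/" // directory named in the log message
	entries, err := os.ReadDir(dir)
	if err != nil || len(entries) == 0 {
		fmt.Printf("no CNI config in %s yet (err=%v): network provider still starting\n", dir, err)
		return
	}
	for _, e := range entries {
		fmt.Println("found CNI config:", e.Name())
	}
}
```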
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.655001 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.655605 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.655610 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.656061 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.657703 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.658114 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.658311 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.658359 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.659367 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.660621 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 00:11:32.76995931 +0000 UTC Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.696968 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.718419 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.732865 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.749899 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.750814 4898 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.768010 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.785003 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788316 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788371 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788413 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788487 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788520 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788557 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788598 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788632 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788667 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788702 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788736 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788771 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788801 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788833 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788864 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788887 4898 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788939 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789054 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789100 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789141 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789177 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789215 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789251 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789282 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789312 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789343 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789379 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789419 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789484 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789519 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789553 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789587 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789663 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789825 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789904 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.788986 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789945 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789984 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790028 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790061 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790148 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790182 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790214 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790245 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") 
" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790504 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790546 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790702 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790736 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790768 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790801 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790835 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790866 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790955 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790990 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 
03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791029 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791111 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791142 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791175 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791210 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791245 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791280 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791311 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791343 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791375 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791407 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791470 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791505 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791566 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791616 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791649 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791682 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791713 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791752 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791787 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791821 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791855 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791892 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791930 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791971 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792011 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792055 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792094 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792135 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792178 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792218 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792255 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790052 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790309 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795086 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789399 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789512 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790546 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790604 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790777 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.789186 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790911 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790878 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.790909 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.791945 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792075 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792078 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792095 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792269 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.792296 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:49:33.292254186 +0000 UTC m=+19.892042075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792508 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792767 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.792772 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.793111 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.793164 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.793268 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.793271 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.793311 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.793686 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.793812 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.793818 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.794405 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.794819 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.794858 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795012 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795179 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795267 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795306 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795310 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795413 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795660 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795682 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795761 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795799 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795822 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795852 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795878 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795891 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.795904 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796033 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796078 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796235 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796276 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796293 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796315 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796497 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796541 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796581 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796623 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796658 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796699 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796736 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796771 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796804 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796838 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796848 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796872 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796913 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796950 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.796983 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797020 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797055 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797115 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797243 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797319 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797321 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797356 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797385 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797408 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797441 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797615 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.797622 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798040 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798150 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798247 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798289 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798447 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798464 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798508 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798707 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798751 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798786 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798772 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.798971 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799088 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799174 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799297 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799383 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799406 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799480 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799521 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799564 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799587 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799599 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799662 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799668 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799786 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799825 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799861 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799864 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799900 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799936 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.799978 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.800016 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.800033 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.800050 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.800252 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.800279 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.800364 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.800727 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.801266 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.801340 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.801621 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.802209 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.802243 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.802250 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.802270 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.802642 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.802691 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.802767 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.802841 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.800052 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.802953 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803023 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803080 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803132 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803186 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803239 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803294 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803344 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803384 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803395 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803515 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803555 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803590 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803631 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803666 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803698 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803732 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803767 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803787 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config"
(OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803801 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804057 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804105 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804204 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804230 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804255 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804280 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804304 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804327 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804351 4898 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804389 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804412 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804456 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804484 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804519 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804548 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804612 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804639 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804668 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 
03:49:32.804701 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804731 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804756 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804778 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804813 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804846 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804875 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804905 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804938 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804969 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 03:49:32 crc 
kubenswrapper[4898]: I0120 03:49:32.805000 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805034 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805064 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805097 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805129 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805160 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805190 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805222 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805254 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805289 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 03:49:32 crc 
kubenswrapper[4898]: I0120 03:49:32.805319 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805350 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805382 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805415 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805472 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805504 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805536 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808155 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808524 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808625 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808698 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808742 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808806 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808893 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808946 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.809147 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.809200 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.809396 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.809545 4898 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.810547 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.810627 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.810677 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.810876 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.810903 4898 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.810929 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.810953 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.810985 4898 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811007 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811029 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811058 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811078 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811098 4898 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811119 4898 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811148 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811171 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811192 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811213 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811239 4898 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811260 4898 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811280 4898 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811307 4898 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811351 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811372 4898 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811392 4898 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811423 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811466 4898 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811488 4898 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811510 4898 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811539 4898 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811562 4898 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811582 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811610 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811696 4898 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811718 4898 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811741 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811769 4898 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811791 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811811 4898 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811831 4898 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811860 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811885 4898 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811905 4898 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811926 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811953 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811973 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.811993 4898 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812012 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812038 4898 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812060 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812081 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812110 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812131 4898 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812151 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812171 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812197 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812217 4898 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812237 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812257 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812281 4898 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812302 4898 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812529 4898 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812564 4898 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812587 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812614 4898 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812635 4898 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812661 4898 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812682 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812702 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812722 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812747 4898 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812767 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812786 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812806 4898 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812833 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812853 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812876 4898 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812901 4898 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812920 4898 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812941 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812961 4898 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.812990 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.813024 4898 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.813045 4898 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.813065 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.813091 4898 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.813111 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.813133 4898 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.817964 4898 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803813 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803830 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.803833 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804205 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804594 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.804816 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805262 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.805502 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.806525 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.806739 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.807302 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.807913 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808031 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808278 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808474 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.808791 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.809010 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.809524 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.810040 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.810224 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.810265 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.812288 4898 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.814997 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.815654 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.815778 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.815767 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.815885 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.816057 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.816352 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.816510 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.816835 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.817498 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.817624 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.813197 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.818420 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.818787 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.819398 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.819469 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.820096 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.820234 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.820414 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.821118 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.821277 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.821331 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.821504 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.821637 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.821973 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.821781 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.822106 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.822411 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.822866 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.822956 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.823643 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.823684 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.824491 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.825251 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:33.325190516 +0000 UTC m=+19.924978435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.823113 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.828120 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.828134 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.828424 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.828531 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.828608 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.828493 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.829040 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.829094 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.829226 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.829535 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.829761 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.829788 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.830061 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.830233 4898 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.830267 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.822133 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.832914 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.833289 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.832980 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:33.332926586 +0000 UTC m=+19.932714485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.833036 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.838551 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.839318 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.839706 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.839873 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.839995 4898 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.840229 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:33.340205991 +0000 UTC m=+19.939993880 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.841134 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.841640 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.844796 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.844844 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.844866 4898 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.844955 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:33.344928896 +0000 UTC m=+19.944717016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.850149 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.850554 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.851127 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.851614 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.851679 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.852041 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.852052 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.852306 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.852761 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.853181 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.853628 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.853671 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.855359 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.856141 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.856484 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.856660 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.856686 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.856694 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.856163 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.856785 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.857018 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.857100 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.857590 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.858320 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.858352 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.858382 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.858672 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.858723 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.859001 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.859394 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.862613 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.862783 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.863723 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.864598 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.864667 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.865419 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.866807 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.878973 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.881172 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.884322 4898 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad" exitCode=255 Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.884473 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad"} Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.885099 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: E0120 03:49:32.893216 4898 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.893604 4898 scope.go:117] "RemoveContainer" containerID="21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.894117 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.901395 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.906948 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.909746 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.912743 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914191 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914291 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914357 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914378 4898 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914397 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914416 4898 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914463 4898 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914482 4898 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914500 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914519 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914539 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914557 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914575 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914595 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914613 4898 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914631 4898 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914647 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914665 4898 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914683 4898 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914702 4898 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914722 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914740 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914758 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914776 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914793 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914809 4898 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914826 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914844 4898 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914862 4898 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914879 4898 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914896 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914914 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914930 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914947 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914963 4898 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914982 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.914999 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915016 4898 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915034 4898 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915051 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915067 4898 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915084 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915101 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915118 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915135 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915152 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915169 4898 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915186 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915202 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915219 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915236 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915253 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915270 4898 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915286 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915303 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915321 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915338 4898 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915357 4898 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915376 4898 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915392 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915409 4898 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915425 4898 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915475 4898 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915495 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915512 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915529 4898 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915546 4898 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915564 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915582 4898 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915602 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915603 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915619 4898 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915707 4898 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915728 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915745 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915759 4898 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915775 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915793 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915805 4898 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915818 4898 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915830 4898 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915816 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915845 4898 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on 
node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915924 4898 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915941 4898 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915960 4898 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915974 4898 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.915986 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916000 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916013 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916027 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916040 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916051 4898 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916063 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916075 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916089 4898 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath 
\"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916102 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916115 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916127 4898 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916139 4898 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916151 4898 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916163 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916174 4898 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916186 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916197 4898 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916210 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916222 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916234 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916244 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916258 4898 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916270 4898 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916282 4898 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.916294 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.923073 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.935896 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20
T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.952715 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.964556 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.974758 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.983759 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.989198 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 03:49:32 crc kubenswrapper[4898]: W0120 03:49:32.993245 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-35c96ba13ead7e5aaea745ca1d83d217d44604cd625cee83a5b13a29a1327835 WatchSource:0}: Error finding container 35c96ba13ead7e5aaea745ca1d83d217d44604cd625cee83a5b13a29a1327835: Status 404 returned error can't find the container with id 35c96ba13ead7e5aaea745ca1d83d217d44604cd625cee83a5b13a29a1327835 Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.996112 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:32 crc kubenswrapper[4898]: I0120 03:49:32.999076 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.015387 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 03:49:33 crc kubenswrapper[4898]: W0120 03:49:33.019043 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-6f9ffab978892cd3e26935f8b60773dda1020c0ce05dadd226746f3d4989074c WatchSource:0}: Error finding container 6f9ffab978892cd3e26935f8b60773dda1020c0ce05dadd226746f3d4989074c: Status 404 returned error can't find the container with id 6f9ffab978892cd3e26935f8b60773dda1020c0ce05dadd226746f3d4989074c Jan 20 03:49:33 crc kubenswrapper[4898]: W0120 03:49:33.031673 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b3e729140cb942a519240f272ee60cf9d9426efa276497cb371248fe9dfcb3a8 WatchSource:0}: Error finding container b3e729140cb942a519240f272ee60cf9d9426efa276497cb371248fe9dfcb3a8: Status 404 returned error can't find the container with id b3e729140cb942a519240f272ee60cf9d9426efa276497cb371248fe9dfcb3a8 Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.320423 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.320752 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:49:34.32071845 +0000 UTC m=+20.920506349 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.422689 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.422758 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.422799 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.422837 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.422914 4898 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.422945 4898 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.423010 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:34.422988287 +0000 UTC m=+21.022776386 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.423035 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:34.423024538 +0000 UTC m=+21.022812677 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.423092 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.423115 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.423132 4898 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.423182 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:34.423161072 +0000 UTC m=+21.022948961 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.423265 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.423315 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.423332 4898 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:33 crc kubenswrapper[4898]: E0120 03:49:33.423370 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:34.423357948 +0000 UTC m=+21.023145847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.660849 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:26:36.68203754 +0000 UTC Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.730323 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.732055 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.737504 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.740546 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.741182 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" 
path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.742120 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.742899 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.743954 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.744633 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.745532 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.746001 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.746692 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.747520 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.748005 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.749731 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.750218 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.754875 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.755259 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.755953 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" 
path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.757086 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.757563 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.758550 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.758962 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.759984 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.760369 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.761452 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.762099 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.762601 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.763645 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.764114 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.764931 4898 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.765032 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.766914 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" 
path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.767862 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.768205 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.768296 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.773143 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.774982 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.779514 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.780623 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.782058 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.782772 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.783611 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.785004 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.786304 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.786969 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.788190 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.788931 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.794095 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.794797 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.796063 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.796773 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.797529 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.798822 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.799422 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.807721 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-che
ck-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.832780 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.860651 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.886038 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.888844 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25"} Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.888902 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"35c96ba13ead7e5aaea745ca1d83d217d44604cd625cee83a5b13a29a1327835"} Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.890316 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" 
Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.892068 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402"} Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.892355 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.893704 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9"} Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.893738 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c"} Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.893751 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b3e729140cb942a519240f272ee60cf9d9426efa276497cb371248fe9dfcb3a8"} Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.894674 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6f9ffab978892cd3e26935f8b60773dda1020c0ce05dadd226746f3d4989074c"} Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.904854 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.918110 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.930941 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.942572 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.961401 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.981692 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:33 crc kubenswrapper[4898]: I0120 03:49:33.999036 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.021843 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.037386 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.052063 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.069501 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.329377 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.329632 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:49:36.32958984 +0000 UTC m=+22.929377739 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.430706 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.430777 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.430807 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.430833 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.430989 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.431007 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.430999 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.431059 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.431067 4898 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.431081 4898 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.431119 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:36.431103994 +0000 UTC m=+23.030891863 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.431165 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:36.431134905 +0000 UTC m=+23.030922794 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.431021 4898 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.431221 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:36.431208527 +0000 UTC m=+23.030996416 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.431008 4898 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.431275 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:36.431262969 +0000 UTC m=+23.031050858 (durationBeforeRetry 2s). 
Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.661930 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:54:46.110838142 +0000 UTC
Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.720675 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.720699 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.720827 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.720693 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.720927 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.720994 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
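All three "Error syncing pod, skipping" entries above share one root cause: the runtime reports NetworkReady=false because nothing has written a CNI config into /etc/kubernetes/cni/net.d/ yet, which is normal until the cluster's network plugin comes back up. The readiness test reduces to a directory scan; a sketch, assuming the usual libcni suffix convention (the .conf/.conflist/.json extension list is an assumption):

    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")   # directory named in the log
    CNI_SUFFIXES = {".conf", ".conflist", ".json"}     # conventional libcni names

    def network_ready(conf_dir: Path) -> bool:
        """Mirror the check behind 'no CNI configuration file in ...'."""
        if not conf_dir.is_dir():
            return False
        return any(p.suffix in CNI_SUFFIXES
                   for p in conf_dir.iterdir() if p.is_file())

    print("NetworkReady:", network_ready(CNI_CONF_DIR))

Until that returns True, sandboxes for network-check-source, network-check-target and the networking console plugin cannot be created, so these pod syncs will keep being skipped.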
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.883740 4898 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.886174 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.886270 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.886298 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.886389 4898 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.896097 4898 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.896520 4898 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.897936 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.897991 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.898011 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.898083 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.898114 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:34Z","lastTransitionTime":"2026-01-20T03:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.936471 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.941354 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.941416 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.941472 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.941500 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.941520 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:34Z","lastTransitionTime":"2026-01-20T03:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.960124 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.965161 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.965212 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
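The status patch itself is well formed; it is rejected because the API server must first consult the node.network-node-identity.openshift.io admission webhook on 127.0.0.1:9743, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node's clock reads 2026-01-20. The x509 failure is nothing more than this comparison (both timestamps taken from the log above):

    import datetime as dt

    not_after = dt.datetime(2025, 8, 24, 17, 21, 41)  # webhook cert NotAfter
    now = dt.datetime(2026, 1, 20, 3, 49, 34)         # node clock at patch time

    # tls: "certificate has expired or is not yet valid" is exactly this test
    if now > not_after:
        print(f"current time {now:%Y-%m-%dT%H:%M:%SZ} "
              f"is after {not_after:%Y-%m-%dT%H:%M:%SZ}")

Until that certificate is rotated, every node-status patch from the kubelet is rejected the same way, which is why the retry at 03:49:35.064766 below fails identically.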
event="NodeHasNoDiskPressure" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.965229 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.965253 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.965270 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:34Z","lastTransitionTime":"2026-01-20T03:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:34 crc kubenswrapper[4898]: E0120 03:49:34.982888 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.988031 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.988060 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
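The setters.go:603 entries that keep appearing show the exact condition object the kubelet is trying to write: lastHeartbeatTime advances on every sync, while lastTransitionTime only moves when the status value flips. A sketch building the same Ready=False condition as logged:

    import json
    import datetime as dt

    def not_ready_condition(now: dt.datetime) -> dict:
        """The Ready=False condition exactly as logged by setters.go:603."""
        stamp = f"{now:%Y-%m-%dT%H:%M:%SZ}"
        return {
            "type": "Ready",
            "status": "False",
            "lastHeartbeatTime": stamp,
            "lastTransitionTime": stamp,  # the real setter preserves the old
                                          # value unless the status flipped
            "reason": "KubeletNotReady",
            "message": "container runtime network not ready: "
                       "NetworkReady=false reason:NetworkPluginNotReady "
                       "message:Network plugin returns error: no CNI "
                       "configuration file in /etc/kubernetes/cni/net.d/. "
                       "Has your network provider started?",
        }

    print(json.dumps(not_ready_condition(dt.datetime(2026, 1, 20, 3, 49, 34))))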
event="NodeHasNoDiskPressure" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.988069 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.988083 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:34 crc kubenswrapper[4898]: I0120 03:49:34.988092 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:34Z","lastTransitionTime":"2026-01-20T03:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:35 crc kubenswrapper[4898]: E0120 03:49:35.035107 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.044915 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.044976 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
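Captures like this one often arrive with many journal entries run together on a single line, which hides the cadence of events, setters updates and failed patches. Re-splitting on the journal prefix makes it visible again; a small sketch keyed to this log's own "Jan 20 ... crc kubenswrapper[4898]:" prefix:

    import re

    # A new journal entry starts at each "MMM DD HH:MM:SS crc unit[pid]:" prefix.
    ENTRY = re.compile(
        r"(?=[A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2} crc [\w-]+\[\d+\]:)")

    def split_entries(blob: str) -> list[str]:
        """Split a run-together journal capture into one entry per line."""
        return [p.strip() for p in ENTRY.split(blob) if p.strip()]

    blob = ('Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.044999 ... '
            'Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.045020 ...')
    for entry in split_entries(blob):
        print(entry)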
event="NodeHasNoDiskPressure" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.044999 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.045020 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.045034 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:35Z","lastTransitionTime":"2026-01-20T03:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:35 crc kubenswrapper[4898]: E0120 03:49:35.064766 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: E0120 03:49:35.064975 4898 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.066623 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.066658 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.066674 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.066692 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.066705 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:35Z","lastTransitionTime":"2026-01-20T03:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.169342 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.169415 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.169463 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.169492 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.169523 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:35Z","lastTransitionTime":"2026-01-20T03:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.271988 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.272074 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.272095 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.272120 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.272138 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:35Z","lastTransitionTime":"2026-01-20T03:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.374715 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.374795 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.374813 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.374840 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.374858 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:35Z","lastTransitionTime":"2026-01-20T03:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.397763 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.404396 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.410084 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.424779 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.446362 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.468968 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.477654 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.477722 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.477742 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.477770 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.477787 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:35Z","lastTransitionTime":"2026-01-20T03:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.490829 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.513020 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.534423 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.557115 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.581130 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.581187 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.581204 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.581229 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.581247 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:35Z","lastTransitionTime":"2026-01-20T03:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.594068 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.619171 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.654895 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.662322 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:44:28.651976016 +0000 UTC Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.679147 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.684950 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.685009 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.685029 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.685055 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.685074 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:35Z","lastTransitionTime":"2026-01-20T03:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.700222 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.723203 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.747620 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.769890 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.788562 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.788631 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.788652 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.788679 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.788697 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:35Z","lastTransitionTime":"2026-01-20T03:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.795789 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.814955 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:35Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.892025 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.892084 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.892103 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.892129 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.892149 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:35Z","lastTransitionTime":"2026-01-20T03:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.995048 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.995113 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.995130 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.995158 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:35 crc kubenswrapper[4898]: I0120 03:49:35.995177 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:35Z","lastTransitionTime":"2026-01-20T03:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.097705 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.097762 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.097781 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.097805 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.097823 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:36Z","lastTransitionTime":"2026-01-20T03:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.201150 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.201258 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.201276 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.201306 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.201326 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:36Z","lastTransitionTime":"2026-01-20T03:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.304513 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.304563 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.304582 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.304609 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.304627 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:36Z","lastTransitionTime":"2026-01-20T03:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.347156 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.347203 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:49:40.347148944 +0000 UTC m=+26.946936843 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.408044 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.408111 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.408129 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.408158 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.408177 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:36Z","lastTransitionTime":"2026-01-20T03:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.448604 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.448674 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.448715 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.448756 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.448784 4898 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.448868 4898 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.448977 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.449021 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.449040 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.449042 4898 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.449062 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.448877 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:40.448849694 +0000 UTC m=+27.048637593 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.449082 4898 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.449111 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:40.449090451 +0000 UTC m=+27.048878320 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.449135 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:40.449124712 +0000 UTC m=+27.048912581 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.449155 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:40.449145383 +0000 UTC m=+27.048933252 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.510359 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.510421 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.510470 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.510495 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.510512 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:36Z","lastTransitionTime":"2026-01-20T03:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.613605 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.613667 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.613685 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.613710 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.613729 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:36Z","lastTransitionTime":"2026-01-20T03:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.662998 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:33:15.138312274 +0000 UTC Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.716831 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.717188 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.717355 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.717559 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.717757 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:36Z","lastTransitionTime":"2026-01-20T03:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.720760 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.720906 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.721128 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.721195 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.721290 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:49:36 crc kubenswrapper[4898]: E0120 03:49:36.721420 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.823031 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.823111 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.823131 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.823184 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.823208 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:36Z","lastTransitionTime":"2026-01-20T03:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.906552 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0"} Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.926145 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.926201 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.926214 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.926237 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.926250 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:36Z","lastTransitionTime":"2026-01-20T03:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.929975 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:36Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.948920 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:36Z is 
after 2025-08-24T17:21:41Z" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.971584 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/va
r/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:36Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:36 crc kubenswrapper[4898]: I0120 03:49:36.994249 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:36Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.007097 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.029273 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.030318 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.030379 4898 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.030393 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.030416 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.030450 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:37Z","lastTransitionTime":"2026-01-20T03:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.048512 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.066079 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.087897 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.133017 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.133054 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.133065 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.133080 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.133091 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:37Z","lastTransitionTime":"2026-01-20T03:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.184954 4898 csr.go:261] certificate signing request csr-hg9gh is approved, waiting to be issued Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.194862 4898 csr.go:257] certificate signing request csr-hg9gh is issued Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.236260 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.236343 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.236353 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.236370 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.236382 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:37Z","lastTransitionTime":"2026-01-20T03:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.282190 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8tbv5"] Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.282702 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8tbv5" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.288163 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.291881 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.292013 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.323279 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.338499 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.338533 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.338541 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.338555 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.338565 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:37Z","lastTransitionTime":"2026-01-20T03:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.340310 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.351942 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.356622 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vclz\" (UniqueName: \"kubernetes.io/projected/352ab345-1f5f-42e3-b57c-63eec90a7fa6-kube-api-access-7vclz\") pod \"node-resolver-8tbv5\" (UID: \"352ab345-1f5f-42e3-b57c-63eec90a7fa6\") " pod="openshift-dns/node-resolver-8tbv5" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.356663 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/352ab345-1f5f-42e3-b57c-63eec90a7fa6-hosts-file\") pod \"node-resolver-8tbv5\" (UID: \"352ab345-1f5f-42e3-b57c-63eec90a7fa6\") " pod="openshift-dns/node-resolver-8tbv5" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.370307 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.382125 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.393156 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.407067 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.418862 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.435014 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.440890 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.440924 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.440933 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.440954 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.440963 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:37Z","lastTransitionTime":"2026-01-20T03:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.447918 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.457254 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/352ab345-1f5f-42e3-b57c-63eec90a7fa6-hosts-file\") pod \"node-resolver-8tbv5\" (UID: \"352ab345-1f5f-42e3-b57c-63eec90a7fa6\") " pod="openshift-dns/node-resolver-8tbv5" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.457339 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vclz\" (UniqueName: \"kubernetes.io/projected/352ab345-1f5f-42e3-b57c-63eec90a7fa6-kube-api-access-7vclz\") pod \"node-resolver-8tbv5\" (UID: \"352ab345-1f5f-42e3-b57c-63eec90a7fa6\") " pod="openshift-dns/node-resolver-8tbv5" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.457420 4898 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/352ab345-1f5f-42e3-b57c-63eec90a7fa6-hosts-file\") pod \"node-resolver-8tbv5\" (UID: \"352ab345-1f5f-42e3-b57c-63eec90a7fa6\") " pod="openshift-dns/node-resolver-8tbv5" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.474999 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vclz\" (UniqueName: \"kubernetes.io/projected/352ab345-1f5f-42e3-b57c-63eec90a7fa6-kube-api-access-7vclz\") pod \"node-resolver-8tbv5\" (UID: \"352ab345-1f5f-42e3-b57c-63eec90a7fa6\") " pod="openshift-dns/node-resolver-8tbv5" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.544067 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.544123 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.544135 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.544155 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.544175 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:37Z","lastTransitionTime":"2026-01-20T03:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.592934 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8tbv5" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.647597 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.647961 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.648057 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.648153 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.648238 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:37Z","lastTransitionTime":"2026-01-20T03:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.666636 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:33:37.838952894 +0000 UTC Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.751378 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.751448 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.751461 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.751481 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.751496 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:37Z","lastTransitionTime":"2026-01-20T03:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.854383 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.854440 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.854451 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.854468 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.854479 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:37Z","lastTransitionTime":"2026-01-20T03:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.910139 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8tbv5" event={"ID":"352ab345-1f5f-42e3-b57c-63eec90a7fa6","Type":"ContainerStarted","Data":"08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.910201 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8tbv5" event={"ID":"352ab345-1f5f-42e3-b57c-63eec90a7fa6","Type":"ContainerStarted","Data":"32fa251a09fc859f8236d952e6bc07c52afff70458eacab4b918b589062444f9"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.948305 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.956931 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.956984 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.956993 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.957014 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.957024 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:37Z","lastTransitionTime":"2026-01-20T03:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.973660 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:37 crc kubenswrapper[4898]: I0120 03:49:37.993154 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.006300 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.018251 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.033562 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.046277 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.059159 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.059516 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.059552 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.059560 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.059579 
4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.059590 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:38Z","lastTransitionTime":"2026-01-20T03:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.069997 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.095730 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.147527 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-cwlf6"] Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.147973 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.148593 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-c9l7w"] Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.149681 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.153174 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hzxwz"] Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.154576 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.171985 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.172255 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.172282 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.172393 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.172457 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-897rl"] Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.172259 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.172591 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.172889 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.172891 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.172945 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.172993 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.173082 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.173082 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.173121 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.173276 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.173298 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.176793 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.177056 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.177329 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.178959 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.179621 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.179659 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.179673 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.179693 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.179707 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:38Z","lastTransitionTime":"2026-01-20T03:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.183528 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.196407 4898 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-20 03:44:37 +0000 UTC, rotation deadline is 2026-10-23 10:30:44.943996921 +0000 UTC Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.196487 4898 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6630h41m6.74751353s for next certificate rotation Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.209213 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6
e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.222959 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.241632 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.257971 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.267984 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272634 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-env-overrides\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272666 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272686 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-var-lib-cni-bin\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272701 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-var-lib-kubelet\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272717 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-daemon-config\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272731 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-netns\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272747 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-etc-openvswitch\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272761 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272780 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-systemd-units\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272795 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aef68392-4b9d-4a0c-a90e-8f04051fda21-rootfs\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc 
kubenswrapper[4898]: I0120 03:49:38.272808 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aef68392-4b9d-4a0c-a90e-8f04051fda21-proxy-tls\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272822 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1288aab6-09fa-40a3-8ff8-e00002a32d61-cni-binary-copy\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272837 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovn-node-metrics-cert\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272880 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-cnibin\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272895 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-cnibin\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272910 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-hostroot\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272923 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-var-lib-openvswitch\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272939 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-log-socket\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272954 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-config\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272969 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z65r\" (UniqueName: \"kubernetes.io/projected/aef68392-4b9d-4a0c-a90e-8f04051fda21-kube-api-access-7z65r\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272982 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-var-lib-cni-multus\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.272998 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b9485c68-c108-4b2d-9278-97f57ed65716-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273011 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-cni-dir\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273054 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8rsj\" (UniqueName: \"kubernetes.io/projected/b9485c68-c108-4b2d-9278-97f57ed65716-kube-api-access-m8rsj\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273071 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-os-release\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273085 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-bin\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273101 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b9485c68-c108-4b2d-9278-97f57ed65716-cni-binary-copy\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273121 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-run-netns\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273136 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-slash\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273150 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-openvswitch\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273164 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273185 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-os-release\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273199 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-system-cni-dir\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273215 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-socket-dir-parent\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273229 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-systemd\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273243 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-kubelet\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273257 4898 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-node-log\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273270 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-netd\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273284 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-script-lib\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273298 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-etc-kubernetes\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273313 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aef68392-4b9d-4a0c-a90e-8f04051fda21-mcd-auth-proxy-config\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273328 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-system-cni-dir\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273343 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-run-k8s-cni-cncf-io\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273358 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-conf-dir\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273372 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-run-multus-certs\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc 
kubenswrapper[4898]: I0120 03:49:38.273386 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcsws\" (UniqueName: \"kubernetes.io/projected/1288aab6-09fa-40a3-8ff8-e00002a32d61-kube-api-access-mcsws\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273400 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-ovn\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.273415 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g868\" (UniqueName: \"kubernetes.io/projected/91759377-eaa1-4bcf-99f3-bad12cd513c2-kube-api-access-9g868\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.276640 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.282280 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.282310 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.282321 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.282341 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.282354 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:38Z","lastTransitionTime":"2026-01-20T03:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.288896 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.302820 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.314859 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.327390 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.337800 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.352020 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.364424 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374688 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcsws\" (UniqueName: \"kubernetes.io/projected/1288aab6-09fa-40a3-8ff8-e00002a32d61-kube-api-access-mcsws\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374731 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-system-cni-dir\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374750 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-run-k8s-cni-cncf-io\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374766 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-conf-dir\") pod \"multus-897rl\" (UID: 
\"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374785 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-run-multus-certs\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374801 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-ovn\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374830 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g868\" (UniqueName: \"kubernetes.io/projected/91759377-eaa1-4bcf-99f3-bad12cd513c2-kube-api-access-9g868\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374848 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374865 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-var-lib-cni-bin\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374880 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-var-lib-kubelet\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374895 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-env-overrides\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374910 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-daemon-config\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374926 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-netns\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc 
kubenswrapper[4898]: I0120 03:49:38.374945 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-systemd-units\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374965 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-etc-openvswitch\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.374982 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375005 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-cnibin\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375024 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aef68392-4b9d-4a0c-a90e-8f04051fda21-rootfs\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375041 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aef68392-4b9d-4a0c-a90e-8f04051fda21-proxy-tls\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375056 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1288aab6-09fa-40a3-8ff8-e00002a32d61-cni-binary-copy\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375093 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovn-node-metrics-cert\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375112 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-cnibin\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375130 4898 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z65r\" (UniqueName: \"kubernetes.io/projected/aef68392-4b9d-4a0c-a90e-8f04051fda21-kube-api-access-7z65r\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375148 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-var-lib-cni-multus\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375162 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-hostroot\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375177 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-var-lib-openvswitch\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375192 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-log-socket\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375206 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-config\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375231 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b9485c68-c108-4b2d-9278-97f57ed65716-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375247 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-cni-dir\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375243 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-systemd-units\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375264 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m8rsj\" (UniqueName: \"kubernetes.io/projected/b9485c68-c108-4b2d-9278-97f57ed65716-kube-api-access-m8rsj\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375342 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-os-release\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375375 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-bin\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375407 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375480 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-os-release\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375516 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b9485c68-c108-4b2d-9278-97f57ed65716-cni-binary-copy\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375536 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-etc-openvswitch\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375550 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-var-lib-cni-bin\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375547 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-run-netns\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375577 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-run-multus-certs\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375591 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-ovn\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375601 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aef68392-4b9d-4a0c-a90e-8f04051fda21-rootfs\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375596 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-slash\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375626 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-conf-dir\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375630 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-openvswitch\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375646 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-openvswitch\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375663 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-system-cni-dir\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375666 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-var-lib-kubelet\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375689 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-socket-dir-parent\") pod \"multus-897rl\" (UID: 
\"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375709 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375704 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-bin\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375732 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-systemd\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375711 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-run-k8s-cni-cncf-io\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375713 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-systemd\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375746 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-run-netns\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375630 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-slash\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375568 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375585 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-cnibin\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 
03:49:38.375822 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-netns\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375825 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-host-var-lib-cni-multus\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375834 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-system-cni-dir\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375860 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-var-lib-openvswitch\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375867 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-os-release\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375886 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-socket-dir-parent\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.375999 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-cni-dir\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376028 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-log-socket\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376072 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-cnibin\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376111 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-system-cni-dir\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: 
\"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376143 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-os-release\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376194 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-hostroot\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376216 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b9485c68-c108-4b2d-9278-97f57ed65716-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376275 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-etc-kubernetes\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376316 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-kubelet\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376340 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-node-log\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376373 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-node-log\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376374 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-netd\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376396 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-kubelet\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376417 4898 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-script-lib\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376453 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1288aab6-09fa-40a3-8ff8-e00002a32d61-etc-kubernetes\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376466 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1288aab6-09fa-40a3-8ff8-e00002a32d61-cni-binary-copy\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376507 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aef68392-4b9d-4a0c-a90e-8f04051fda21-mcd-auth-proxy-config\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376512 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-netd\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376565 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-env-overrides\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376563 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1288aab6-09fa-40a3-8ff8-e00002a32d61-multus-daemon-config\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376804 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b9485c68-c108-4b2d-9278-97f57ed65716-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.376900 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-config\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.377164 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/aef68392-4b9d-4a0c-a90e-8f04051fda21-mcd-auth-proxy-config\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.377184 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-script-lib\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.377405 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b9485c68-c108-4b2d-9278-97f57ed65716-cni-binary-copy\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.378942 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.379610 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aef68392-4b9d-4a0c-a90e-8f04051fda21-proxy-tls\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.379650 
4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovn-node-metrics-cert\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.384411 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.384464 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.384476 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.384491 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.384502 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:38Z","lastTransitionTime":"2026-01-20T03:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.391414 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g868\" (UniqueName: \"kubernetes.io/projected/91759377-eaa1-4bcf-99f3-bad12cd513c2-kube-api-access-9g868\") pod \"ovnkube-node-hzxwz\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.392519 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8rsj\" (UniqueName: \"kubernetes.io/projected/b9485c68-c108-4b2d-9278-97f57ed65716-kube-api-access-m8rsj\") pod \"multus-additional-cni-plugins-c9l7w\" (UID: \"b9485c68-c108-4b2d-9278-97f57ed65716\") " pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.393136 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z65r\" (UniqueName: \"kubernetes.io/projected/aef68392-4b9d-4a0c-a90e-8f04051fda21-kube-api-access-7z65r\") pod \"machine-config-daemon-cwlf6\" (UID: \"aef68392-4b9d-4a0c-a90e-8f04051fda21\") " pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.393590 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcsws\" (UniqueName: \"kubernetes.io/projected/1288aab6-09fa-40a3-8ff8-e00002a32d61-kube-api-access-mcsws\") pod \"multus-897rl\" (UID: \"1288aab6-09fa-40a3-8ff8-e00002a32d61\") " pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.395126 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.405037 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.416703 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.425954 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.435328 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.454611 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: 
I0120 03:49:38.460512 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.464319 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: W0120 03:49:38.470103 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaef68392_4b9d_4a0c_a90e_8f04051fda21.slice/crio-7d3857135cd0377cb265fcdca50f5933b8ea6cccaabe89da7ab014f2a5e6bd04 WatchSource:0}: Error finding container 7d3857135cd0377cb265fcdca50f5933b8ea6cccaabe89da7ab014f2a5e6bd04: Status 404 returned error can't find the container with id 7d3857135cd0377cb265fcdca50f5933b8ea6cccaabe89da7ab014f2a5e6bd04 Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.481246 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.481284 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.486079 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.486110 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.486119 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.486132 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.486142 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:38Z","lastTransitionTime":"2026-01-20T03:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.488681 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.493714 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-897rl" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.502570 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",
\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: W0120 03:49:38.504453 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91759377_eaa1_4bcf_99f3_bad12cd513c2.slice/crio-a6d470bab6bd1cc289aa8b22013ee39bba817c177737717bf4bffb1be73faa9c WatchSource:0}: Error finding container a6d470bab6bd1cc289aa8b22013ee39bba817c177737717bf4bffb1be73faa9c: Status 404 returned error can't find the container with id a6d470bab6bd1cc289aa8b22013ee39bba817c177737717bf4bffb1be73faa9c Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.515683 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.525671 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.588743 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.588771 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.588779 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.588792 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.588800 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:38Z","lastTransitionTime":"2026-01-20T03:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.666754 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:37:26.207899947 +0000 UTC Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.690918 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.690941 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.690949 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.690961 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.690970 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:38Z","lastTransitionTime":"2026-01-20T03:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.720334 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.720380 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.720579 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:38 crc kubenswrapper[4898]: E0120 03:49:38.720691 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:49:38 crc kubenswrapper[4898]: E0120 03:49:38.720810 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:49:38 crc kubenswrapper[4898]: E0120 03:49:38.720929 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.793117 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.793190 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.793204 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.793222 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.793233 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:38Z","lastTransitionTime":"2026-01-20T03:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.895584 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.895627 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.895641 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.895655 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.895664 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:38Z","lastTransitionTime":"2026-01-20T03:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.913896 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-897rl" event={"ID":"1288aab6-09fa-40a3-8ff8-e00002a32d61","Type":"ContainerStarted","Data":"61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.913947 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-897rl" event={"ID":"1288aab6-09fa-40a3-8ff8-e00002a32d61","Type":"ContainerStarted","Data":"71bb4fc3b39143ae5780a7d4f514c86e9724bbf9ab6ae9c631a4de843797ff57"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.915849 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630" exitCode=0 Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.915975 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.916070 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"a6d470bab6bd1cc289aa8b22013ee39bba817c177737717bf4bffb1be73faa9c"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.918346 4898 generic.go:334] "Generic (PLEG): container finished" podID="b9485c68-c108-4b2d-9278-97f57ed65716" containerID="3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170" exitCode=0 Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.918474 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" event={"ID":"b9485c68-c108-4b2d-9278-97f57ed65716","Type":"ContainerDied","Data":"3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.918669 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" event={"ID":"b9485c68-c108-4b2d-9278-97f57ed65716","Type":"ContainerStarted","Data":"b88b4de7294fcd7b4df014dec5f101b43f883cded4aa2733506fce944945c20d"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.929497 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.929777 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.929862 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"7d3857135cd0377cb265fcdca50f5933b8ea6cccaabe89da7ab014f2a5e6bd04"} Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.939350 4898 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\
\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.981527 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@
sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.998572 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.998597 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.998604 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.998616 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:38 crc kubenswrapper[4898]: I0120 03:49:38.998625 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:38Z","lastTransitionTime":"2026-01-20T03:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.018562 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.036412 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd36
7c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.049853 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.063447 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.074163 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.090727 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.101660 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.102647 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.102682 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.102692 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.102709 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.102718 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:39Z","lastTransitionTime":"2026-01-20T03:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.120699 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.131778 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.149117 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.162233 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.174665 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.186785 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.205536 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.206283 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.206323 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.206336 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.206351 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.206361 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:39Z","lastTransitionTime":"2026-01-20T03:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.220057 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.238768 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with 
Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.238768 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\
",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.257628 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.274162 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.289481 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.306481 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z"
Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.309875 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.309951 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.309973 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.310004 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.310024 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:39Z","lastTransitionTime":"2026-01-20T03:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
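
Note: the payloads being rejected throughout are Kubernetes strategic merge patches. The $setElementOrder/conditions directive pins the desired ordering of the conditions list, while the entries under conditions merge into the existing list by their "type" key instead of replacing it wholesale. A minimal Go sketch that assembles a patch of the same shape follows; it is simplified for illustration, with the UID taken from the node-resolver-8tbv5 record near the top of this stream.

```go
// Simplified sketch of the strategic-merge-patch shape seen in the rejected
// status payloads above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"metadata": map[string]any{"uid": "352ab345-1f5f-42e3-b57c-63eec90a7fa6"},
		"status": map[string]any{
			// Pins the order of the conditions list on the server.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			// Only changed entries travel in the patch; they merge into
			// the existing list by the "type" key.
			"conditions": []map[string]any{
				{"type": "Ready", "status": "True", "lastTransitionTime": "2026-01-20T03:49:37Z"},
			},
		},
	}
	out, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```
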
Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.324822 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z"
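
Note: a quick way to confirm the webhook's TLS state from the node itself is to attempt the same handshake the API server's webhook client performs. The sketch below dials the endpoint from these records and reports the verification error; depending on the configured trust roots it may surface as an unknown-authority error rather than an expiry error, so treat the output as a hint rather than proof.

```go
// Diagnostic sketch: perform the TLS handshake against the webhook endpoint
// from these records and report the verification error. Run on the node.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{})
	if err != nil {
		var invalid x509.CertificateInvalidError
		if errors.As(err, &invalid) && invalid.Reason == x509.Expired {
			fmt.Println("webhook serving certificate is expired:", err)
		} else {
			fmt.Println("handshake failed:", err)
		}
		return
	}
	defer conn.Close()
	fmt.Println("handshake OK; NotAfter =", conn.ConnectionState().PeerCertificates[0].NotAfter)
}
```
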
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.342626 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.356953 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.377409 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.392713 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.403681 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.412335 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.412374 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.412394 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.412412 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.412450 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:39Z","lastTransitionTime":"2026-01-20T03:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.514494 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.514739 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.514752 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.514769 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.514781 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:39Z","lastTransitionTime":"2026-01-20T03:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.624558 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.624607 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.624623 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.624641 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.624654 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:39Z","lastTransitionTime":"2026-01-20T03:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.667882 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:07:35.005590685 +0000 UTC Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.726421 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.727310 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.727383 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.727465 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.727522 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:39Z","lastTransitionTime":"2026-01-20T03:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.830523 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.830894 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.830907 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.830931 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.830945 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:39Z","lastTransitionTime":"2026-01-20T03:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.932753 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.932780 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.932788 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.932801 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.932811 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:39Z","lastTransitionTime":"2026-01-20T03:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.938913 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.938947 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.938959 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.938971 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.938982 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.940665 4898 generic.go:334] "Generic (PLEG): container finished" podID="b9485c68-c108-4b2d-9278-97f57ed65716" containerID="32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b" exitCode=0 Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.940698 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" event={"ID":"b9485c68-c108-4b2d-9278-97f57ed65716","Type":"ContainerDied","Data":"32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b"} Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.964273 4898 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkub
e-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574
53265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.980330 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:39 crc kubenswrapper[4898]: I0120 03:49:39.998383 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.012555 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.032909 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.035119 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.035153 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.035165 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.035184 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.035197 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:40Z","lastTransitionTime":"2026-01-20T03:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.045173 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.062973 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.076762 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.092832 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.106186 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.118456 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.130335 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.142600 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.142648 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.142659 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.142676 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.142687 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:40Z","lastTransitionTime":"2026-01-20T03:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.145777 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.158699 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.244950 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.244992 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.245031 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.245050 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.245060 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:40Z","lastTransitionTime":"2026-01-20T03:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.347660 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.347719 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.347733 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.347758 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.347773 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:40Z","lastTransitionTime":"2026-01-20T03:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.398358 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.398689 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:49:48.39863373 +0000 UTC m=+34.998421639 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.450918 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.450962 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.450972 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.450987 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.450998 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:40Z","lastTransitionTime":"2026-01-20T03:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.499882 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.499944 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.499986 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.500035 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500076 4898 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500170 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:48.500147784 +0000 UTC m=+35.099935643 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500170 4898 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500198 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500226 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500244 4898 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500293 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:48.500271488 +0000 UTC m=+35.100059347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500313 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:48.500306349 +0000 UTC m=+35.100094208 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500365 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500469 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500497 4898 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.500621 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:48.500583707 +0000 UTC m=+35.100371616 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.553161 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.553228 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.553247 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.553276 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.553295 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:40Z","lastTransitionTime":"2026-01-20T03:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.656255 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.656322 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.656342 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.656370 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.656389 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:40Z","lastTransitionTime":"2026-01-20T03:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.668905 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:26:52.197354422 +0000 UTC Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.720946 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.721000 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.721077 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.721167 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.721390 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:49:40 crc kubenswrapper[4898]: E0120 03:49:40.721721 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.740025 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-cc9r6"] Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.740782 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cc9r6" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.746923 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.747151 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.747509 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.748039 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.759333 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.759482 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.759570 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.759644 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.759716 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:40Z","lastTransitionTime":"2026-01-20T03:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.787838 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.806132 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.817568 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.832878 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.853712 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.861659 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.861711 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.861724 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.861743 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.861761 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:40Z","lastTransitionTime":"2026-01-20T03:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.872089 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.904801 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f739192d-fd16-4394-b6fc-742d14c876e4-host\") pod \"node-ca-cc9r6\" (UID: \"f739192d-fd16-4394-b6fc-742d14c876e4\") " pod="openshift-image-registry/node-ca-cc9r6" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.904850 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mbc\" (UniqueName: \"kubernetes.io/projected/f739192d-fd16-4394-b6fc-742d14c876e4-kube-api-access-d9mbc\") pod \"node-ca-cc9r6\" (UID: \"f739192d-fd16-4394-b6fc-742d14c876e4\") " pod="openshift-image-registry/node-ca-cc9r6" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.905040 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f739192d-fd16-4394-b6fc-742d14c876e4-serviceca\") pod \"node-ca-cc9r6\" (UID: \"f739192d-fd16-4394-b6fc-742d14c876e4\") " pod="openshift-image-registry/node-ca-cc9r6" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.916008 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.931646 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.946419 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.948139 4898 generic.go:334] "Generic (PLEG): container finished" podID="b9485c68-c108-4b2d-9278-97f57ed65716" containerID="232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d" exitCode=0 Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.948165 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" event={"ID":"b9485c68-c108-4b2d-9278-97f57ed65716","Type":"ContainerDied","Data":"232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.962748 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.969004 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.969049 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.969062 4898 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.969082 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.969094 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:40Z","lastTransitionTime":"2026-01-20T03:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:40 crc kubenswrapper[4898]: I0120 03:49:40.989409 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:40Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.002663 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.005763 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f739192d-fd16-4394-b6fc-742d14c876e4-serviceca\") pod \"node-ca-cc9r6\" (UID: \"f739192d-fd16-4394-b6fc-742d14c876e4\") " pod="openshift-image-registry/node-ca-cc9r6" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.005810 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f739192d-fd16-4394-b6fc-742d14c876e4-host\") pod \"node-ca-cc9r6\" (UID: \"f739192d-fd16-4394-b6fc-742d14c876e4\") " pod="openshift-image-registry/node-ca-cc9r6" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.005833 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mbc\" (UniqueName: \"kubernetes.io/projected/f739192d-fd16-4394-b6fc-742d14c876e4-kube-api-access-d9mbc\") pod \"node-ca-cc9r6\" (UID: \"f739192d-fd16-4394-b6fc-742d14c876e4\") " pod="openshift-image-registry/node-ca-cc9r6" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.005932 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f739192d-fd16-4394-b6fc-742d14c876e4-host\") pod \"node-ca-cc9r6\" (UID: 
\"f739192d-fd16-4394-b6fc-742d14c876e4\") " pod="openshift-image-registry/node-ca-cc9r6" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.007528 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f739192d-fd16-4394-b6fc-742d14c876e4-serviceca\") pod \"node-ca-cc9r6\" (UID: \"f739192d-fd16-4394-b6fc-742d14c876e4\") " pod="openshift-image-registry/node-ca-cc9r6" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.016617 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.026916 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9mbc\" (UniqueName: \"kubernetes.io/projected/f739192d-fd16-4394-b6fc-742d14c876e4-kube-api-access-d9mbc\") pod \"node-ca-cc9r6\" (UID: \"f739192d-fd16-4394-b6fc-742d14c876e4\") " pod="openshift-image-registry/node-ca-cc9r6" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.028104 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.045367 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z 
is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.055543 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.058154 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-cc9r6" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.072175 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.084953 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.085300 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.085313 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.085334 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.085347 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:41Z","lastTransitionTime":"2026-01-20T03:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:41 crc kubenswrapper[4898]: W0120 03:49:41.086972 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf739192d_fd16_4394_b6fc_742d14c876e4.slice/crio-60f9c60025f92765267a4d62c5da7d0721dfc3afc0b77cd42a6da15735472ff1 WatchSource:0}: Error finding container 60f9c60025f92765267a4d62c5da7d0721dfc3afc0b77cd42a6da15735472ff1: Status 404 returned error can't find the container with id 60f9c60025f92765267a4d62c5da7d0721dfc3afc0b77cd42a6da15735472ff1 Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.096199 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.109880 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.124517 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.140695 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.150878 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.162045 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.173354 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.184478 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.189371 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.189393 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.189402 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.189415 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.189424 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:41Z","lastTransitionTime":"2026-01-20T03:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.202742 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b
80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.213059 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.224000 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.242208 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.253227 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.262387 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.293510 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.293545 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.293555 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.293866 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.293907 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:41Z","lastTransitionTime":"2026-01-20T03:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.395927 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.395973 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.395983 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.395999 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.396013 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:41Z","lastTransitionTime":"2026-01-20T03:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.498927 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.498973 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.498983 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.499003 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.499015 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:41Z","lastTransitionTime":"2026-01-20T03:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.601503 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.601577 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.601590 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.601616 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.601631 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:41Z","lastTransitionTime":"2026-01-20T03:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.669611 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:18:50.499297926 +0000 UTC Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.705023 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.705091 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.705105 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.705129 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.705142 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:41Z","lastTransitionTime":"2026-01-20T03:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.808954 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.809041 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.809061 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.809094 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.809114 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:41Z","lastTransitionTime":"2026-01-20T03:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.912546 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.912617 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.912636 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.912661 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.912678 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:41Z","lastTransitionTime":"2026-01-20T03:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.954840 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cc9r6" event={"ID":"f739192d-fd16-4394-b6fc-742d14c876e4","Type":"ContainerStarted","Data":"ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a"} Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.954972 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cc9r6" event={"ID":"f739192d-fd16-4394-b6fc-742d14c876e4","Type":"ContainerStarted","Data":"60f9c60025f92765267a4d62c5da7d0721dfc3afc0b77cd42a6da15735472ff1"} Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.959984 4898 generic.go:334] "Generic (PLEG): container finished" podID="b9485c68-c108-4b2d-9278-97f57ed65716" containerID="b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9" exitCode=0 Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.960057 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" event={"ID":"b9485c68-c108-4b2d-9278-97f57ed65716","Type":"ContainerDied","Data":"b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9"} Jan 20 03:49:41 crc kubenswrapper[4898]: I0120 03:49:41.996122 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:41Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.016637 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.016685 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.016702 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.016725 4898 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.016743 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:42Z","lastTransitionTime":"2026-01-20T03:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.018664 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.037162 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.063321 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.087714 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.110305 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.121042 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.121099 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.121117 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.121142 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.121161 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:42Z","lastTransitionTime":"2026-01-20T03:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.129142 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.143614 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.159305 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.176728 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.192316 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.211561 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.223467 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.223556 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.223637 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.223708 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.223843 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:42Z","lastTransitionTime":"2026-01-20T03:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.228335 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.243584 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.265365 4898 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.278350 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.302745 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019be
e1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.320109 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.326244 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.326284 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.326293 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.326308 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.326320 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:42Z","lastTransitionTime":"2026-01-20T03:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.339688 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:
49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.353092 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.373995 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.389822 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.402613 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.417735 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.429112 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.429147 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.429161 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.429178 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.429190 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:42Z","lastTransitionTime":"2026-01-20T03:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.432719 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.445881 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.464569 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.478956 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.509486 4898 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.524352 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:42Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.531385 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.531424 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.531453 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.531471 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.531483 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:42Z","lastTransitionTime":"2026-01-20T03:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.636021 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.636864 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.637028 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.637150 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.637258 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:42Z","lastTransitionTime":"2026-01-20T03:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.670683 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:32:22.996437565 +0000 UTC Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.721103 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:42 crc kubenswrapper[4898]: E0120 03:49:42.721284 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.721817 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:42 crc kubenswrapper[4898]: E0120 03:49:42.722044 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.722210 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:42 crc kubenswrapper[4898]: E0120 03:49:42.722346 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.742304 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.742513 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.742665 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.742798 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.742913 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:42Z","lastTransitionTime":"2026-01-20T03:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.847064 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.847881 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.848034 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.848153 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.848262 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:42Z","lastTransitionTime":"2026-01-20T03:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.950939 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.950999 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.951017 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.951042 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.951060 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:42Z","lastTransitionTime":"2026-01-20T03:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.984255 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"} Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.992199 4898 generic.go:334] "Generic (PLEG): container finished" podID="b9485c68-c108-4b2d-9278-97f57ed65716" containerID="ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b" exitCode=0 Jan 20 03:49:42 crc kubenswrapper[4898]: I0120 03:49:42.992400 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" event={"ID":"b9485c68-c108-4b2d-9278-97f57ed65716","Type":"ContainerDied","Data":"ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b"} Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.013237 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.037633 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.054983 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.055054 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.055081 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.055115 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.055141 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:43Z","lastTransitionTime":"2026-01-20T03:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.056180 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.086517 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.108368 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.135758 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.154331 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.158851 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.158903 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.158923 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.158947 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.158964 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:43Z","lastTransitionTime":"2026-01-20T03:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.171826 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.190925 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.211839 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.236049 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.254914 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.261486 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.261551 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.261570 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.261593 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.261612 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:43Z","lastTransitionTime":"2026-01-20T03:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.273244 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.289978 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.302415 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.364882 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.364906 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.364915 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.364928 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.364938 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:43Z","lastTransitionTime":"2026-01-20T03:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.467969 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.468029 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.468050 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.468078 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.468100 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:43Z","lastTransitionTime":"2026-01-20T03:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.545252 4898 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.575473 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.575529 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.575546 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.575573 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.575591 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:43Z","lastTransitionTime":"2026-01-20T03:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.671626 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:34:09.160793751 +0000 UTC Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.679539 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.679598 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.679616 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.679646 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.679664 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:43Z","lastTransitionTime":"2026-01-20T03:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.744652 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.765703 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.783187 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.783256 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.783275 4898 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.783304 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.783321 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:43Z","lastTransitionTime":"2026-01-20T03:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.787236 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.805078 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.826665 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.852956 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.872881 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.886585 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.886648 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.886670 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.886700 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.886719 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:43Z","lastTransitionTime":"2026-01-20T03:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.905960 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b
80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.941647 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ec
c8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.960311 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.977039 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.990382 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.990472 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.990493 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.990523 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.990543 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:43Z","lastTransitionTime":"2026-01-20T03:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.997588 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.999818 4898 generic.go:334] "Generic (PLEG): container finished" podID="b9485c68-c108-4b2d-9278-97f57ed65716" containerID="214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913" exitCode=0 Jan 20 03:49:43 crc kubenswrapper[4898]: I0120 03:49:43.999937 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" event={"ID":"b9485c68-c108-4b2d-9278-97f57ed65716","Type":"ContainerDied","Data":"214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913"} Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.019340 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.045514 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.061201 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.077569 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.102817 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.125974 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.126015 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.126024 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.126039 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.126069 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:44Z","lastTransitionTime":"2026-01-20T03:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.137715 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.157973 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.169087 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.177213 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.197107 4898 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.206892 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.219971 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.227652 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.227690 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.227699 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.227717 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.227727 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:44Z","lastTransitionTime":"2026-01-20T03:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.243449 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-d
ir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-
01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.257776 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.274988 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.296481 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.310865 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.330129 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.330201 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.330219 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.330249 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.330266 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:44Z","lastTransitionTime":"2026-01-20T03:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.330376 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.433372 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.433466 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.433485 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.433512 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.433529 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:44Z","lastTransitionTime":"2026-01-20T03:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.536178 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.536588 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.536729 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.536881 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.537018 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:44Z","lastTransitionTime":"2026-01-20T03:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.640241 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.640304 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.640318 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.640366 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.640379 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:44Z","lastTransitionTime":"2026-01-20T03:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.672280 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:11:13.191124225 +0000 UTC Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.720920 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.721015 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.720943 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:44 crc kubenswrapper[4898]: E0120 03:49:44.721172 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:49:44 crc kubenswrapper[4898]: E0120 03:49:44.721462 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:49:44 crc kubenswrapper[4898]: E0120 03:49:44.721572 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.744989 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.745047 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.745065 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.745097 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.745130 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:44Z","lastTransitionTime":"2026-01-20T03:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.849866 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.849965 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.849986 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.850011 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.850043 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:44Z","lastTransitionTime":"2026-01-20T03:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.953279 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.953346 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.953365 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.953395 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:44 crc kubenswrapper[4898]: I0120 03:49:44.953421 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:44Z","lastTransitionTime":"2026-01-20T03:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.010817 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.011581 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.011642 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.019888 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" event={"ID":"b9485c68-c108-4b2d-9278-97f57ed65716","Type":"ContainerStarted","Data":"bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.030358 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.054756 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.056978 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.057027 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.057046 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.057071 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.057094 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.060486 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.066505 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.088425 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.111675 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.131918 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.156512 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.160744 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.160805 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.160820 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.160841 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.160854 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.176681 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.195038 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.215786 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.230993 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.249008 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.269423 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.269544 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.269566 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.269598 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.269622 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.273773 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.287646 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.287687 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.287705 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.287727 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.287747 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.293733 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: E0120 03:49:45.303865 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.308319 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.308536 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.308659 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.308824 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.308952 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.328289 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0774a9bba570a603541511d4fe27b95f2c8211b
652b6d86cbc3082a31f0ed16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: E0120 03:49:45.330269 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed2
1\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.335710 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: 
I0120 03:49:45.335783 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.335808 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.335841 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.335869 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.347180 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: E0120 03:49:45.359415 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.363767 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.363821 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.363841 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.363871 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.363893 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.370284 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.386921 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: E0120 03:49:45.388847 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.394175 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.394207 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.394220 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.394240 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.394255 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: E0120 03:49:45.414300 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: E0120 03:49:45.414465 4898 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.416470 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.416497 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.416509 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.416525 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.416537 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.417738 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.437647 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.451941 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.465991 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.486856 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.504922 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.519538 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.519601 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.519619 4898 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.519649 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.519668 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.522883 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.541381 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.559275 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.577081 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.593810 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.624206 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.624262 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.624280 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.624307 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.624328 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.630987 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.646018 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:45Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.673093 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:55:56.571516229 +0000 UTC Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.726516 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.726568 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.726586 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.726610 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.726627 4898 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.829661 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.829704 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.829715 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.829731 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.829741 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.932942 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.933002 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.933020 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.933087 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:45 crc kubenswrapper[4898]: I0120 03:49:45.933109 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:45Z","lastTransitionTime":"2026-01-20T03:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.023227 4898 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.036181 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.036570 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.036588 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.036615 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.036633 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:46Z","lastTransitionTime":"2026-01-20T03:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.139723 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.139802 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.139829 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.139859 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.139882 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:46Z","lastTransitionTime":"2026-01-20T03:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.244026 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.244082 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.244097 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.244117 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.244133 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:46Z","lastTransitionTime":"2026-01-20T03:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.371484 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.371533 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.371548 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.371569 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.371586 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:46Z","lastTransitionTime":"2026-01-20T03:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.473778 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.473828 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.473840 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.473858 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.473870 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:46Z","lastTransitionTime":"2026-01-20T03:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.576704 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.576806 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.576827 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.576862 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.576888 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:46Z","lastTransitionTime":"2026-01-20T03:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.673284 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:44:57.590419883 +0000 UTC Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.679498 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.679552 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.679568 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.679589 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.679604 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:46Z","lastTransitionTime":"2026-01-20T03:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.720282 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.720341 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.720389 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:46 crc kubenswrapper[4898]: E0120 03:49:46.720459 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:49:46 crc kubenswrapper[4898]: E0120 03:49:46.720532 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:49:46 crc kubenswrapper[4898]: E0120 03:49:46.720713 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.782165 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.782218 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.782236 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.782259 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.782277 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:46Z","lastTransitionTime":"2026-01-20T03:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.885193 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.885230 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.885240 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.885259 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.885270 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:46Z","lastTransitionTime":"2026-01-20T03:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.987889 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.987946 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.987961 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.987986 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:46 crc kubenswrapper[4898]: I0120 03:49:46.988005 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:46Z","lastTransitionTime":"2026-01-20T03:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.026915 4898 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.091816 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.091872 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.091889 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.091912 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.091931 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:47Z","lastTransitionTime":"2026-01-20T03:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.194744 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.195243 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.195330 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.195423 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.195543 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:47Z","lastTransitionTime":"2026-01-20T03:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.299949 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.300024 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.300042 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.300070 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.300091 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:47Z","lastTransitionTime":"2026-01-20T03:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.403565 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.403643 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.403671 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.403713 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.403736 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:47Z","lastTransitionTime":"2026-01-20T03:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.506803 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.506890 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.506914 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.506949 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.506988 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:47Z","lastTransitionTime":"2026-01-20T03:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.610416 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.610518 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.610537 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.610568 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.610589 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:47Z","lastTransitionTime":"2026-01-20T03:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.673698 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:13:42.926050286 +0000 UTC Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.714921 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.714981 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.715000 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.715027 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.715050 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:47Z","lastTransitionTime":"2026-01-20T03:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.818766 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.818839 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.818857 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.818883 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.818906 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:47Z","lastTransitionTime":"2026-01-20T03:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.856240 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.889784 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-op
erator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:47Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.913048 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:47Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.921801 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.921873 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.921894 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.921928 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.921948 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:47Z","lastTransitionTime":"2026-01-20T03:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.933569 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:47Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.949004 4898 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:47Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.968273 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:47Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:47 crc kubenswrapper[4898]: I0120 03:49:47.987047 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:47Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.002685 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.016423 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.024626 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.024659 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.024668 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.024690 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.024701 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:48Z","lastTransitionTime":"2026-01-20T03:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.032084 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.032659 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/0.log" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.036936 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16" exitCode=1 Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.037035 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16"} Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.038319 4898 scope.go:117] "RemoveContainer" containerID="f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16" Jan 
20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.053135 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.068306 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.098788 4898 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.128711 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.128801 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.128826 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:48 crc 
kubenswrapper[4898]: I0120 03:49:48.128867 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.128891 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:48Z","lastTransitionTime":"2026-01-20T03:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.132197 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd83
0004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.153215 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.169931 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.204542 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.228096 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.233076 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.233154 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.233178 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.233205 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.233228 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:48Z","lastTransitionTime":"2026-01-20T03:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.246154 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.268107 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.288261 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.313248 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.335043 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.336583 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.336685 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.336709 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.336765 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.336788 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:48Z","lastTransitionTime":"2026-01-20T03:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.353633 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.374990 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.394179 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.427886 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0774a9bba570a603541511d4fe27b95f2c8211b
652b6d86cbc3082a31f0ed16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"message\\\":\\\"ry.go:140\\\\nI0120 03:49:46.999662 6220 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999686 6220 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:46.999703 6220 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999986 6220 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:47.000768 6220 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 03:49:47.000840 6220 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:47.000851 6220 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:47.000892 6220 factory.go:656] Stopping watch factory\\\\nI0120 03:49:47.000911 6220 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:47.000922 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 03:49:47.000940 6220 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 
03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.440484 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.440555 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.440583 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.440623 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.440652 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:48Z","lastTransitionTime":"2026-01-20T03:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.460591 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.475412 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.488653 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.500031 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.500191 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.500344 4898 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.500442 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:50:04.500412993 +0000 UTC m=+51.100200852 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.500916 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:50:04.500866657 +0000 UTC m=+51.100654526 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.505242 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:48Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.543538 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.543587 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.543599 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.543616 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.543627 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:48Z","lastTransitionTime":"2026-01-20T03:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.601049 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.601100 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.601139 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.601243 4898 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.601308 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 
03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.601345 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.601263 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.601363 4898 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.601370 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.601383 4898 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.601328 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:50:04.601310788 +0000 UTC m=+51.201098637 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.601522 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 03:50:04.601465403 +0000 UTC m=+51.201253262 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.601549 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 03:50:04.601540545 +0000 UTC m=+51.201328404 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.646270 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.646313 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.646325 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.646344 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.646357 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:48Z","lastTransitionTime":"2026-01-20T03:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.674415 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:06:29.713581185 +0000 UTC Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.721202 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.721360 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.721210 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.721421 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.721202 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:48 crc kubenswrapper[4898]: E0120 03:49:48.721495 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.748848 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.748891 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.748901 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.748915 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.748925 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:48Z","lastTransitionTime":"2026-01-20T03:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.851062 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.851108 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.851119 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.851138 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.851150 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:48Z","lastTransitionTime":"2026-01-20T03:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.953155 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.953194 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.953211 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.953232 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:48 crc kubenswrapper[4898]: I0120 03:49:48.953245 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:48Z","lastTransitionTime":"2026-01-20T03:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.041885 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/0.log" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.044840 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc"} Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.044958 4898 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.055267 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.055300 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.055309 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.055321 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.055332 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:49Z","lastTransitionTime":"2026-01-20T03:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.066690 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.088673 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.100570 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.114342 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.135175 4898 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"message\\\":\\\"ry.go:140\\\\nI0120 03:49:46.999662 6220 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999686 6220 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:46.999703 6220 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999986 6220 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:47.000768 6220 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 03:49:47.000840 6220 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:47.000851 6220 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:47.000892 6220 factory.go:656] Stopping watch factory\\\\nI0120 03:49:47.000911 6220 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:47.000922 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 03:49:47.000940 6220 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 
03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.146310 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.157119 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.157153 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.157163 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.157178 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.157189 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:49Z","lastTransitionTime":"2026-01-20T03:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.157956 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.172261 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.197264 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cr
i-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.214894 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.229488 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.248798 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.259173 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.259213 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.259224 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.259243 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.259264 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:49Z","lastTransitionTime":"2026-01-20T03:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.269325 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.286680 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.305853 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:49Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.360881 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.360930 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.360941 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.360960 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.360974 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:49Z","lastTransitionTime":"2026-01-20T03:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.464274 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.464342 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.464360 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.464387 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.464407 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:49Z","lastTransitionTime":"2026-01-20T03:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.567977 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.568025 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.568038 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.568058 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.568070 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:49Z","lastTransitionTime":"2026-01-20T03:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.670589 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.670653 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.670670 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.670694 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.670712 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:49Z","lastTransitionTime":"2026-01-20T03:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.674806 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:59:42.460948632 +0000 UTC Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.772915 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.772969 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.772986 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.773014 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.773031 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:49Z","lastTransitionTime":"2026-01-20T03:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.876379 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.876472 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.876492 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.876524 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.876543 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:49Z","lastTransitionTime":"2026-01-20T03:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.997793 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.997850 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.997868 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.997895 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:49 crc kubenswrapper[4898]: I0120 03:49:49.997915 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:49Z","lastTransitionTime":"2026-01-20T03:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.052380 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/1.log" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.053920 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/0.log" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.057960 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc" exitCode=1 Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.058047 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc"} Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.058114 4898 scope.go:117] "RemoveContainer" containerID="f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.060893 4898 scope.go:117] "RemoveContainer" containerID="d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc" Jan 20 03:49:50 crc kubenswrapper[4898]: E0120 03:49:50.061181 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.075199 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.089416 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.100676 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.100727 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.100741 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.100761 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.100775 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:50Z","lastTransitionTime":"2026-01-20T03:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.107113 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.122228 4898 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.138507 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.154483 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.172693 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.192919 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.206979 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.207035 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.207057 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.207085 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.207105 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:50Z","lastTransitionTime":"2026-01-20T03:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.220961 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.243999 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.282590 4898 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"message\\\":\\\"ry.go:140\\\\nI0120 03:49:46.999662 6220 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999686 6220 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:46.999703 6220 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999986 6220 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:47.000768 6220 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 03:49:47.000840 6220 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:47.000851 6220 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:47.000892 6220 factory.go:656] Stopping watch factory\\\\nI0120 03:49:47.000911 6220 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:47.000922 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 03:49:47.000940 6220 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:49Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111415 6339 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111861 6339 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:49.111897 6339 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:49.111918 6339 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 03:49:49.111930 6339 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 03:49:49.111960 6339 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 03:49:49.111956 6339 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03:49:49.111986 6339 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 03:49:49.111971 6339 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 03:49:49.112002 6339 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 03:49:49.112004 6339 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 03:49:49.112039 6339 factory.go:656] Stopping watch factory\\\\nI0120 03:49:49.112064 6339 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:49.112102 6339 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 03:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.297090 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.309591 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.309640 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.309651 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.309671 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.309684 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:50Z","lastTransitionTime":"2026-01-20T03:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.321489 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.339381 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.343710 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd"] Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.344221 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.346476 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.346736 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.358579 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.374577 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.394731 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.411851 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.413204 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.413256 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.413268 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.413288 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.413305 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:50Z","lastTransitionTime":"2026-01-20T03:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.423229 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b457daf-f965-454f-b073-093908ec2385-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.423277 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b457daf-f965-454f-b073-093908ec2385-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.423306 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrj8n\" (UniqueName: \"kubernetes.io/projected/3b457daf-f965-454f-b073-093908ec2385-kube-api-access-rrj8n\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.423344 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b457daf-f965-454f-b073-093908ec2385-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.439380 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"message\\\":\\\"ry.go:140\\\\nI0120 03:49:46.999662 6220 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999686 6220 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:46.999703 6220 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999986 6220 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:47.000768 6220 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 03:49:47.000840 6220 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:47.000851 6220 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:47.000892 6220 factory.go:656] Stopping watch factory\\\\nI0120 03:49:47.000911 6220 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:47.000922 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 03:49:47.000940 6220 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:49Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111415 6339 reflector.go:311] Stopping reflector *v1.Namespace 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111861 6339 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:49.111897 6339 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:49.111918 6339 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 03:49:49.111930 6339 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 03:49:49.111960 6339 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 03:49:49.111956 6339 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03:49:49.111986 6339 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 03:49:49.111971 6339 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 03:49:49.112002 6339 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 03:49:49.112004 6339 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 03:49:49.112039 6339 factory.go:656] Stopping watch factory\\\\nI0120 03:49:49.112064 6339 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:49.112102 6339 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 03:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.455918 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.474140 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.496288 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.514040 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.515949 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.516015 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.516038 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.516065 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.516086 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:50Z","lastTransitionTime":"2026-01-20T03:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.524147 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b457daf-f965-454f-b073-093908ec2385-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.524224 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b457daf-f965-454f-b073-093908ec2385-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.524270 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrj8n\" (UniqueName: \"kubernetes.io/projected/3b457daf-f965-454f-b073-093908ec2385-kube-api-access-rrj8n\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.524329 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b457daf-f965-454f-b073-093908ec2385-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.526019 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b457daf-f965-454f-b073-093908ec2385-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.526070 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b457daf-f965-454f-b073-093908ec2385-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.537867 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.538800 4898 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b457daf-f965-454f-b073-093908ec2385-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.548247 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrj8n\" (UniqueName: \"kubernetes.io/projected/3b457daf-f965-454f-b073-093908ec2385-kube-api-access-rrj8n\") pod \"ovnkube-control-plane-749d76644c-nz5wd\" (UID: \"3b457daf-f965-454f-b073-093908ec2385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.572486 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.593831 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.610151 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.619058 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.619127 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.619147 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.619173 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.619191 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:50Z","lastTransitionTime":"2026-01-20T03:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.634113 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.655705 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.662786 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.674912 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:04:21.244301989 +0000 UTC Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.684332 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b5
1b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.708783 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:50Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.725662 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.725788 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:50 crc kubenswrapper[4898]: E0120 03:49:50.725930 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.725954 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:50 crc kubenswrapper[4898]: E0120 03:49:50.726082 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:49:50 crc kubenswrapper[4898]: E0120 03:49:50.726104 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.727553 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.727594 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.727635 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.727660 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.727681 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:50Z","lastTransitionTime":"2026-01-20T03:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.830843 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.830886 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.830897 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.830915 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.830926 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:50Z","lastTransitionTime":"2026-01-20T03:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.933034 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.933073 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.933082 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.933093 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:50 crc kubenswrapper[4898]: I0120 03:49:50.933103 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:50Z","lastTransitionTime":"2026-01-20T03:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.035881 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.035947 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.035963 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.035986 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.036003 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:51Z","lastTransitionTime":"2026-01-20T03:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.066480 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/1.log" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.077658 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" event={"ID":"3b457daf-f965-454f-b073-093908ec2385","Type":"ContainerStarted","Data":"f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.077728 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" event={"ID":"3b457daf-f965-454f-b073-093908ec2385","Type":"ContainerStarted","Data":"a172cb84b12d7a7517c2d7cf2b1feb2949ef0e9772c99d9b9cce18cfbe25c54b"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.098672 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.120349 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\
\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.138781 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.138847 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.138867 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.138894 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.138915 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:51Z","lastTransitionTime":"2026-01-20T03:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.146222 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.167943 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.185569 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.207180 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.233769 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.241702 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.241763 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.241783 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.241810 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.241835 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:51Z","lastTransitionTime":"2026-01-20T03:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.260920 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.279844 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.296257 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.320614 4898 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"message\\\":\\\"ry.go:140\\\\nI0120 03:49:46.999662 6220 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999686 6220 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:46.999703 6220 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999986 6220 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:47.000768 6220 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 03:49:47.000840 6220 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:47.000851 6220 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:47.000892 6220 factory.go:656] Stopping watch factory\\\\nI0120 03:49:47.000911 6220 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:47.000922 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 03:49:47.000940 6220 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:49Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111415 6339 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111861 6339 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:49.111897 6339 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:49.111918 6339 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 03:49:49.111930 6339 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 03:49:49.111960 6339 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 03:49:49.111956 6339 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03:49:49.111986 6339 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 03:49:49.111971 6339 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 03:49:49.112002 6339 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 03:49:49.112004 6339 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 03:49:49.112039 6339 factory.go:656] Stopping watch factory\\\\nI0120 03:49:49.112064 6339 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:49.112102 6339 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 03:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.338616 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.344007 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.344055 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.344069 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.344097 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.344115 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:51Z","lastTransitionTime":"2026-01-20T03:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.361694 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.379398 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.413794 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.433617 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:51Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.446819 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.446871 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.446892 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.446922 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.446942 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:51Z","lastTransitionTime":"2026-01-20T03:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.550070 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.550108 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.550119 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.550135 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.550146 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:51Z","lastTransitionTime":"2026-01-20T03:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.652912 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.653000 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.653018 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.653045 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.653066 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:51Z","lastTransitionTime":"2026-01-20T03:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.675489 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:10:21.738978222 +0000 UTC Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.755314 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.755350 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.755361 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.755378 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.755390 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:51Z","lastTransitionTime":"2026-01-20T03:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.858375 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.858485 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.858506 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.858538 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.858562 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:51Z","lastTransitionTime":"2026-01-20T03:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.962219 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.962290 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.962308 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.962341 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:51 crc kubenswrapper[4898]: I0120 03:49:51.962363 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:51Z","lastTransitionTime":"2026-01-20T03:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.065952 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.066013 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.066031 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.066064 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.066084 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:52Z","lastTransitionTime":"2026-01-20T03:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.088137 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" event={"ID":"3b457daf-f965-454f-b073-093908ec2385","Type":"ContainerStarted","Data":"970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c"} Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.170178 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.170269 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.170289 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.170322 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.170349 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:52Z","lastTransitionTime":"2026-01-20T03:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.274084 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.274133 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.274151 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.274175 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.274194 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:52Z","lastTransitionTime":"2026-01-20T03:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.282330 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5hkf9"] Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.283045 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:49:52 crc kubenswrapper[4898]: E0120 03:49:52.283138 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.311576 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.339138 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.344238 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttn4w\" (UniqueName: \"kubernetes.io/projected/e93f051c-f83c-4d27-a695-dd5a33e979f4-kube-api-access-ttn4w\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 
03:49:52.344330 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.358618 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.377844 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.377888 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.377905 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.377933 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.377951 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:52Z","lastTransitionTime":"2026-01-20T03:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.379621 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.401201 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.421896 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.441703 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.445161 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttn4w\" (UniqueName: \"kubernetes.io/projected/e93f051c-f83c-4d27-a695-dd5a33e979f4-kube-api-access-ttn4w\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.445647 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:49:52 crc kubenswrapper[4898]: E0120 03:49:52.445818 4898 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:49:52 crc kubenswrapper[4898]: E0120 03:49:52.445907 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs podName:e93f051c-f83c-4d27-a695-dd5a33e979f4 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:52.945884057 +0000 UTC m=+39.545671946 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs") pod "network-metrics-daemon-5hkf9" (UID: "e93f051c-f83c-4d27-a695-dd5a33e979f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.473878 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"message\\\":\\\"ry.go:140\\\\nI0120 03:49:46.999662 6220 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999686 6220 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:46.999703 6220 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999986 6220 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:47.000768 6220 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 03:49:47.000840 6220 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:47.000851 6220 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:47.000892 6220 factory.go:656] Stopping watch factory\\\\nI0120 03:49:47.000911 6220 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:47.000922 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 03:49:47.000940 6220 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 
03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:49Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111415 6339 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111861 6339 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:49.111897 6339 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:49.111918 6339 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 03:49:49.111930 6339 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 03:49:49.111960 6339 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 03:49:49.111956 6339 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03:49:49.111986 6339 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 03:49:49.111971 6339 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 03:49:49.112002 6339 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 03:49:49.112004 6339 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 03:49:49.112039 6339 factory.go:656] Stopping watch factory\\\\nI0120 03:49:49.112064 6339 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:49.112102 6339 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 
03:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.474129 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttn4w\" (UniqueName: \"kubernetes.io/projected/e93f051c-f83c-4d27-a695-dd5a33e979f4-kube-api-access-ttn4w\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.480804 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.480858 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.480880 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.480910 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.480934 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:52Z","lastTransitionTime":"2026-01-20T03:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.492505 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.525083 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.547570 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.565271 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.584238 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.584286 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.584308 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.584337 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.584359 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:52Z","lastTransitionTime":"2026-01-20T03:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.591831 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9
f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.615048 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.638984 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.658588 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.675213 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:52Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:52 crc 
kubenswrapper[4898]: I0120 03:49:52.676219 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:46:05.617760583 +0000 UTC Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.688118 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.688179 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.688197 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.688226 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.688244 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:52Z","lastTransitionTime":"2026-01-20T03:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.720525 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.720533 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:52 crc kubenswrapper[4898]: E0120 03:49:52.720750 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.720536 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:52 crc kubenswrapper[4898]: E0120 03:49:52.720869 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:49:52 crc kubenswrapper[4898]: E0120 03:49:52.721003 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.791261 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.791338 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.791357 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.791386 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.791407 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:52Z","lastTransitionTime":"2026-01-20T03:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.894862 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.894914 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.894932 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.894954 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.894971 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:52Z","lastTransitionTime":"2026-01-20T03:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.953017 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:49:52 crc kubenswrapper[4898]: E0120 03:49:52.953231 4898 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:49:52 crc kubenswrapper[4898]: E0120 03:49:52.953315 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs podName:e93f051c-f83c-4d27-a695-dd5a33e979f4 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:53.95329705 +0000 UTC m=+40.553084929 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs") pod "network-metrics-daemon-5hkf9" (UID: "e93f051c-f83c-4d27-a695-dd5a33e979f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.998757 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.998825 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.998841 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.998867 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:52 crc kubenswrapper[4898]: I0120 03:49:52.998882 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:52Z","lastTransitionTime":"2026-01-20T03:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.102384 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.102487 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.102512 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.102541 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.102563 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:53Z","lastTransitionTime":"2026-01-20T03:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.206617 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.207030 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.207047 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.207075 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.207093 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:53Z","lastTransitionTime":"2026-01-20T03:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.311199 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.311261 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.311280 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.311305 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.311324 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:53Z","lastTransitionTime":"2026-01-20T03:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.414678 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.414770 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.414788 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.414822 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.414867 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:53Z","lastTransitionTime":"2026-01-20T03:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.518034 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.518150 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.518170 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.518195 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.518216 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:53Z","lastTransitionTime":"2026-01-20T03:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.622071 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.622124 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.622139 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.622163 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.622180 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:53Z","lastTransitionTime":"2026-01-20T03:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.677098 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:05:37.445300317 +0000 UTC Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.720808 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:49:53 crc kubenswrapper[4898]: E0120 03:49:53.721012 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.725469 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.725529 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.725549 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.725579 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.725599 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:53Z","lastTransitionTime":"2026-01-20T03:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.742085 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.763383 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.784020 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.802764 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.828616 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.828695 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.828723 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.828755 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.828780 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:53Z","lastTransitionTime":"2026-01-20T03:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.831517 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.838970 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.840067 4898 scope.go:117] "RemoveContainer" containerID="d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc" Jan 20 03:49:53 crc kubenswrapper[4898]: E0120 03:49:53.840298 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.863308 4898 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 
03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.892913 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09
cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0774a9bba570a603541511d4fe27b95f2c8211b652b6d86cbc3082a31f0ed16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"message\\\":\\\"ry.go:140\\\\nI0120 03:49:46.999662 6220 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999686 6220 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:46.999703 6220 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:46.999986 6220 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:49:47.000768 6220 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 03:49:47.000840 6220 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:47.000851 6220 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:47.000892 6220 factory.go:656] Stopping watch factory\\\\nI0120 03:49:47.000911 6220 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:47.000922 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 03:49:47.000940 6220 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 
03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:49Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111415 6339 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111861 6339 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:49.111897 6339 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:49.111918 6339 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 03:49:49.111930 6339 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 03:49:49.111960 6339 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 03:49:49.111956 6339 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03:49:49.111986 6339 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 03:49:49.111971 6339 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 03:49:49.112002 6339 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 03:49:49.112004 6339 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 03:49:49.112039 6339 factory.go:656] Stopping watch factory\\\\nI0120 03:49:49.112064 6339 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:49.112102 6339 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 
03:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.908373 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.928012 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.932889 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.933043 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.933128 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.933224 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.933307 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:53Z","lastTransitionTime":"2026-01-20T03:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.948527 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.964559 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:49:53 crc kubenswrapper[4898]: E0120 03:49:53.964904 4898 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:49:53 crc kubenswrapper[4898]: E0120 03:49:53.965006 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs podName:e93f051c-f83c-4d27-a695-dd5a33e979f4 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:55.964979796 +0000 UTC m=+42.564767685 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs") pod "network-metrics-daemon-5hkf9" (UID: "e93f051c-f83c-4d27-a695-dd5a33e979f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.965614 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:53 crc kubenswrapper[4898]: I0120 03:49:53.998881 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:53Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.027073 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.036686 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.036728 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.036743 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.036763 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.036778 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:54Z","lastTransitionTime":"2026-01-20T03:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.053602 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.074542 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-di
r-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.091847 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.114807 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.135497 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.140404 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.140487 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.140504 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.140528 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.140546 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:54Z","lastTransitionTime":"2026-01-20T03:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.157068 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.180351 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.202376 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.224278 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.243774 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.243869 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.243890 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.243923 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.243944 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:54Z","lastTransitionTime":"2026-01-20T03:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.246660 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.267840 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:49Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111415 6339 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111861 6339 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:49.111897 6339 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:49.111918 6339 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 03:49:49.111930 6339 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 03:49:49.111960 6339 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 03:49:49.111956 6339 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03:49:49.111986 6339 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 03:49:49.111971 6339 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 03:49:49.112002 6339 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 03:49:49.112004 6339 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 03:49:49.112039 6339 factory.go:656] Stopping watch factory\\\\nI0120 03:49:49.112064 6339 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:49.112102 6339 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 
03:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.282013 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.300089 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 
03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.316201 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.341209 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.346672 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.346723 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.346744 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.346775 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.346795 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:54Z","lastTransitionTime":"2026-01-20T03:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.364867 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.387275 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.404642 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.422905 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc 
kubenswrapper[4898]: I0120 03:49:54.441781 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.450050 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.450144 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.450166 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.450227 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.450251 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:54Z","lastTransitionTime":"2026-01-20T03:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.465491 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:54Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.554126 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.554198 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.554220 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.554250 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.554271 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:54Z","lastTransitionTime":"2026-01-20T03:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.658316 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.658383 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.658400 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.658426 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.658488 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:54Z","lastTransitionTime":"2026-01-20T03:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.678136 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:59:55.993274672 +0000 UTC Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.720565 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.720567 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.720901 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:54 crc kubenswrapper[4898]: E0120 03:49:54.720741 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:49:54 crc kubenswrapper[4898]: E0120 03:49:54.721194 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:49:54 crc kubenswrapper[4898]: E0120 03:49:54.721261 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.761849 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.761900 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.761921 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.761947 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.761966 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:54Z","lastTransitionTime":"2026-01-20T03:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.865993 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.866076 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.866094 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.866124 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.866146 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:54Z","lastTransitionTime":"2026-01-20T03:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.969469 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.969538 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.969556 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.969588 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:54 crc kubenswrapper[4898]: I0120 03:49:54.969607 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:54Z","lastTransitionTime":"2026-01-20T03:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.073512 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.073602 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.073626 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.073659 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.073684 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.176968 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.177010 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.177022 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.177040 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.177055 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.280565 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.280602 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.280614 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.280629 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.280640 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.384821 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.384913 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.384934 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.384965 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.384992 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.486118 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.486228 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.486250 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.486277 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.486297 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: E0120 03:49:55.514956 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:55Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.521155 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.521248 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.521271 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.521306 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.521331 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: E0120 03:49:55.543133 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:55Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.548801 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.548889 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.548908 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.548937 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.548986 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: E0120 03:49:55.572608 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:55Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.578893 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.578946 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.578965 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.578993 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.579011 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: E0120 03:49:55.602938 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:55Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.608945 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.609068 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.609088 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.609158 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.609180 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: E0120 03:49:55.632769 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:49:55Z is after 2025-08-24T17:21:41Z" Jan 20 03:49:55 crc kubenswrapper[4898]: E0120 03:49:55.633107 4898 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.635702 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.635807 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.635828 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.635860 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.635882 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.678814 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:30:53.031259779 +0000 UTC Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.720704 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:49:55 crc kubenswrapper[4898]: E0120 03:49:55.720926 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.740479 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.740571 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.740591 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.740697 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.740726 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.843852 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.844025 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.844051 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.844121 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.844151 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.948302 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.948374 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.948392 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.948419 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.948464 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:55Z","lastTransitionTime":"2026-01-20T03:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:55 crc kubenswrapper[4898]: I0120 03:49:55.991353 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:49:55 crc kubenswrapper[4898]: E0120 03:49:55.991694 4898 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:49:55 crc kubenswrapper[4898]: E0120 03:49:55.991811 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs podName:e93f051c-f83c-4d27-a695-dd5a33e979f4 nodeName:}" failed. No retries permitted until 2026-01-20 03:49:59.991779676 +0000 UTC m=+46.591567565 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs") pod "network-metrics-daemon-5hkf9" (UID: "e93f051c-f83c-4d27-a695-dd5a33e979f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.051361 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.051425 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.051480 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.051519 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.051537 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:56Z","lastTransitionTime":"2026-01-20T03:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.154604 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.154661 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.154679 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.154701 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.154720 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:56Z","lastTransitionTime":"2026-01-20T03:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.257593 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.257663 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.257680 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.257711 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.257786 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:56Z","lastTransitionTime":"2026-01-20T03:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.361367 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.361465 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.361487 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.361515 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.361533 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:56Z","lastTransitionTime":"2026-01-20T03:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.464306 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.464361 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.464380 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.464403 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.464420 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:56Z","lastTransitionTime":"2026-01-20T03:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.566272 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.566343 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.566362 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.566388 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.566406 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:56Z","lastTransitionTime":"2026-01-20T03:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.669380 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.669419 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.669457 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.669473 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.669514 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:56Z","lastTransitionTime":"2026-01-20T03:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.679351 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:48:36.895556393 +0000 UTC Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.720258 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.720314 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.720393 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 03:49:56 crc kubenswrapper[4898]: E0120 03:49:56.720603 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 03:49:56 crc kubenswrapper[4898]: E0120 03:49:56.720829 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 03:49:56 crc kubenswrapper[4898]: E0120 03:49:56.720936 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.772706 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.772764 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.772782 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.772807 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.772825 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:56Z","lastTransitionTime":"2026-01-20T03:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.876201 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.876248 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.876265 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.876290 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.876308 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:56Z","lastTransitionTime":"2026-01-20T03:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.979378 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.979469 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.979494 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.979528 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:56 crc kubenswrapper[4898]: I0120 03:49:56.979556 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:56Z","lastTransitionTime":"2026-01-20T03:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.083011 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.083107 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.083128 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.083152 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.083172 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:57Z","lastTransitionTime":"2026-01-20T03:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.186020 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.186086 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.186110 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.186133 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.186148 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:57Z","lastTransitionTime":"2026-01-20T03:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.289321 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.289388 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.289402 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.289422 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.289457 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:57Z","lastTransitionTime":"2026-01-20T03:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.391872 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.391940 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.391959 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.391986 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.392007 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:57Z","lastTransitionTime":"2026-01-20T03:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.494947 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.494981 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.494993 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.495010 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.495023 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:57Z","lastTransitionTime":"2026-01-20T03:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.598463 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.598517 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.598536 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.598560 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.598578 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:57Z","lastTransitionTime":"2026-01-20T03:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
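
Every failure above has the same root cause: kubelet finds no CNI network config under /etc/kubernetes/cni/net.d/, so the node stays NotReady and no pod sandboxes can be created. For illustration only, a minimal Python sketch of the kind of .conflist file the CNI loader scans that directory for; the plugin type, subnet, and file name are assumptions here, since on an OpenShift/CRC node the cluster network operator (multus/ovn-kubernetes) writes the real configuration and this directory should not be hand-edited:

# Sketch only: write a minimal CNI .conflist of the shape the runtime's CNI
# loader looks for. The "bridge"/"host-local" plugins and the subnet are
# placeholders, not what CRC actually deploys. Requires root to write.
import json

conflist = {
    "cniVersion": "0.4.0",
    "name": "demo-net",  # hypothetical network name
    "plugins": [
        {
            "type": "bridge",      # reference CNI plugin, for illustration
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        }
    ],
}

with open("/etc/kubernetes/cni/net.d/99-demo.conflist", "w") as f:
    json.dump(conflist, f, indent=2)

The CNI library sorts the files in that directory and uses the first valid one, which is why the error clears as soon as the network provider drops its config there.
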
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.680057 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:23:32.602977194 +0000 UTC
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.701468 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.701513 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.701533 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.701558 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.701575 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:57Z","lastTransitionTime":"2026-01-20T03:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.720605 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9"
Jan 20 03:49:57 crc kubenswrapper[4898]: E0120 03:49:57.720795 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.804336 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.804404 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.804424 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.804475 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.804494 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:57Z","lastTransitionTime":"2026-01-20T03:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.907866 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.907930 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.907948 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.907974 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:57 crc kubenswrapper[4898]: I0120 03:49:57.907993 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:57Z","lastTransitionTime":"2026-01-20T03:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.011032 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.011152 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.011182 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.011221 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.011249 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:58Z","lastTransitionTime":"2026-01-20T03:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.116801 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.116871 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.116894 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.116927 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.116948 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:58Z","lastTransitionTime":"2026-01-20T03:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.219930 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.219966 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.219978 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.220019 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.220033 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:58Z","lastTransitionTime":"2026-01-20T03:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.323809 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.323864 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.323883 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.323907 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.323924 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:58Z","lastTransitionTime":"2026-01-20T03:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.428315 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.428389 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.428413 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.428470 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.428489 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:58Z","lastTransitionTime":"2026-01-20T03:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.575579 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.575669 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.575692 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.575730 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.575753 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:58Z","lastTransitionTime":"2026-01-20T03:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.680169 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.680258 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.680281 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.680312 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.680337 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:58Z","lastTransitionTime":"2026-01-20T03:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.680386 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:08:43.306743033 +0000 UTC
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.720783 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.720830 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.720854 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 03:49:58 crc kubenswrapper[4898]: E0120 03:49:58.720956 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 03:49:58 crc kubenswrapper[4898]: E0120 03:49:58.721064 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 03:49:58 crc kubenswrapper[4898]: E0120 03:49:58.721294 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.784352 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.784418 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.784450 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.784472 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.784487 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:58Z","lastTransitionTime":"2026-01-20T03:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.887458 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.887519 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.887543 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.887571 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.887591 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:58Z","lastTransitionTime":"2026-01-20T03:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.991371 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.991428 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.991480 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.991507 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:58 crc kubenswrapper[4898]: I0120 03:49:58.991527 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:58Z","lastTransitionTime":"2026-01-20T03:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.094712 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.094775 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.094794 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.094825 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.094851 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:59Z","lastTransitionTime":"2026-01-20T03:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.198044 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.198102 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.198115 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.198134 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.198147 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:59Z","lastTransitionTime":"2026-01-20T03:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.301535 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.301605 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.301625 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.301651 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.301674 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:59Z","lastTransitionTime":"2026-01-20T03:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.403896 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.404002 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.404021 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.404048 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.404066 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:59Z","lastTransitionTime":"2026-01-20T03:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.510093 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.510187 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.510218 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.510258 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.510285 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:59Z","lastTransitionTime":"2026-01-20T03:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.613572 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.613638 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.613661 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.613690 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.613709 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:59Z","lastTransitionTime":"2026-01-20T03:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.681618 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:34:42.651990719 +0000 UTC
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.716163 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.716212 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.716229 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.716252 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.716270 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:59Z","lastTransitionTime":"2026-01-20T03:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.720552 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9"
Jan 20 03:49:59 crc kubenswrapper[4898]: E0120 03:49:59.720715 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.819385 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.819475 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.819489 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.819506 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.819520 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:59Z","lastTransitionTime":"2026-01-20T03:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
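
Note the certificate_manager lines: the same serving certificate (expires 2026-02-24 05:53:03 UTC) is logged with a different rotation deadline each second (2025-11-19, 2025-12-02, 2026-01-05, ...). That is expected: client-go's certificate manager recomputes the deadline on every pass as a uniformly jittered point late in the certificate's validity window, so each log line shows a fresh draw. A Python sketch of that computation, assuming the upstream ~70-90% jitter band and a one-year certificate (the notBefore date is not in the log):

# Sketch: jittered rotation deadline, approximating client-go's certificate
# manager. The 0.7-0.9 band is an assumption based on upstream defaults.
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before, not_after):
    total = (not_after - not_before).total_seconds()
    return not_before + timedelta(seconds=total * random.uniform(0.7, 0.9))

not_after = datetime(2026, 2, 24, 5, 53, 3)       # expiry from the log
not_before = not_after - timedelta(days=365)      # assumed 1-year validity
for _ in range(3):
    # Each draw lands roughly between 2025-11 and 2026-01, matching the
    # spread of deadlines in the log lines above.
    print(rotation_deadline(not_before, not_after))
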
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.926125 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.926198 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.926224 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.926258 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:49:59 crc kubenswrapper[4898]: I0120 03:49:59.926333 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:49:59Z","lastTransitionTime":"2026-01-20T03:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.029542 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.029594 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.029608 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.029629 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.029643 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:00Z","lastTransitionTime":"2026-01-20T03:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.042620 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9"
Jan 20 03:50:00 crc kubenswrapper[4898]: E0120 03:50:00.042821 4898 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 20 03:50:00 crc kubenswrapper[4898]: E0120 03:50:00.042901 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs podName:e93f051c-f83c-4d27-a695-dd5a33e979f4 nodeName:}" failed. No retries permitted until 2026-01-20 03:50:08.04288022 +0000 UTC m=+54.642668089 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs") pod "network-metrics-daemon-5hkf9" (UID: "e93f051c-f83c-4d27-a695-dd5a33e979f4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.131982 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.132034 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.132049 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.132067 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.132080 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:00Z","lastTransitionTime":"2026-01-20T03:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.236104 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.236176 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.236199 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.236232 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.236255 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:00Z","lastTransitionTime":"2026-01-20T03:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
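
The failed metrics-certs mount above is parked with "durationBeforeRetry 8s": the kubelet's volume operation executor doubles the retry delay after each failure of the same operation, so an 8s delay implies several earlier failures. The "not registered" error itself just means the kubelet's secret informer has not yet synced openshift-multus/metrics-daemon-secret, which is unsurprising while the node is still NotReady. A Python sketch of the backoff shape, assuming the upstream 500ms initial delay and ~2m2s cap (values from the exponentialbackoff package defaults, not from this log):

# Sketch: exponential backoff behind "durationBeforeRetry 8s". The initial
# delay and cap below are assumptions based on kubelet's upstream defaults.
INITIAL_DELAY = 0.5   # seconds (assumed)
MAX_DELAY = 2 * 60 + 2  # seconds (assumed cap, ~2m2s)

def backoff_schedule(failures):
    delay = INITIAL_DELAY
    for n in range(1, failures + 1):
        yield n, min(delay, MAX_DELAY)
        delay *= 2

for failure, delay in backoff_schedule(10):
    print(f"after failure {failure}: retry in {delay:g}s")
# Failure 5 yields 8s -- consistent with the logged "durationBeforeRetry 8s",
# i.e. the secret mount had already failed a handful of times by 03:50:00.
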
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.339087 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.339166 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.339191 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.339222 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.339245 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:00Z","lastTransitionTime":"2026-01-20T03:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.448672 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.448747 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.448770 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.448797 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.448814 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:00Z","lastTransitionTime":"2026-01-20T03:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.552295 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.552376 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.552396 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.552468 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.552498 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:00Z","lastTransitionTime":"2026-01-20T03:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.655617 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.655670 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.655683 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.655703 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.655715 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:00Z","lastTransitionTime":"2026-01-20T03:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.682552 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:02:48.166916941 +0000 UTC
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.721166 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.721247 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.721166 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 03:50:00 crc kubenswrapper[4898]: E0120 03:50:00.721421 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 03:50:00 crc kubenswrapper[4898]: E0120 03:50:00.721575 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 03:50:00 crc kubenswrapper[4898]: E0120 03:50:00.721750 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.758534 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.758571 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.758581 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.758598 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.758609 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:00Z","lastTransitionTime":"2026-01-20T03:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.860787 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.860825 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.860837 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.860860 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.860877 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:00Z","lastTransitionTime":"2026-01-20T03:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.964217 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.964558 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.964663 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.964798 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:00 crc kubenswrapper[4898]: I0120 03:50:00.964899 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:00Z","lastTransitionTime":"2026-01-20T03:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.067821 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.067905 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.067924 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.067957 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.067977 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:01Z","lastTransitionTime":"2026-01-20T03:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.170839 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.170934 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.170953 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.170980 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.170997 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:01Z","lastTransitionTime":"2026-01-20T03:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.274356 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.274469 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.274491 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.274524 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.274543 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:01Z","lastTransitionTime":"2026-01-20T03:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.377796 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.377901 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.377921 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.377950 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.377968 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:01Z","lastTransitionTime":"2026-01-20T03:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.481423 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.481516 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.481537 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.481566 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.481587 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:01Z","lastTransitionTime":"2026-01-20T03:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.584747 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.584836 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.584856 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.584893 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.584917 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:01Z","lastTransitionTime":"2026-01-20T03:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.683263 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:08:09.647912152 +0000 UTC
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.689268 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.689358 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.689380 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.689416 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.689466 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:01Z","lastTransitionTime":"2026-01-20T03:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.720707 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9"
Jan 20 03:50:01 crc kubenswrapper[4898]: E0120 03:50:01.720984 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.794233 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.794615 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.794753 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.794883 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.795012 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:01Z","lastTransitionTime":"2026-01-20T03:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.899132 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.899197 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.899217 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.899251 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:01 crc kubenswrapper[4898]: I0120 03:50:01.899277 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:01Z","lastTransitionTime":"2026-01-20T03:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.002759 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.002841 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.002865 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.002898 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.002921 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:02Z","lastTransitionTime":"2026-01-20T03:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.106639 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.106716 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.106734 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.106765 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.106782 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:02Z","lastTransitionTime":"2026-01-20T03:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.210306 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.210757 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.210924 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.211094 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.211234 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:02Z","lastTransitionTime":"2026-01-20T03:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.316662 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.316766 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.316791 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.316832 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.316857 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:02Z","lastTransitionTime":"2026-01-20T03:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.428184 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.428465 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.428486 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.428515 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.428542 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:02Z","lastTransitionTime":"2026-01-20T03:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.533646 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.533704 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.533723 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.533754 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.533777 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:02Z","lastTransitionTime":"2026-01-20T03:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.637106 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.637167 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.637188 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.637213 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.637231 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:02Z","lastTransitionTime":"2026-01-20T03:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.683519 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:09:15.237295574 +0000 UTC Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.721018 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.721076 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.721108 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:02 crc kubenswrapper[4898]: E0120 03:50:02.721268 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:02 crc kubenswrapper[4898]: E0120 03:50:02.721383 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:02 crc kubenswrapper[4898]: E0120 03:50:02.721486 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.740913 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.740948 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.740959 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.740975 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.740987 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:02Z","lastTransitionTime":"2026-01-20T03:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.844727 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.844802 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.844826 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.845264 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.845319 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:02Z","lastTransitionTime":"2026-01-20T03:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.948481 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.948548 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.948566 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.948593 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:02 crc kubenswrapper[4898]: I0120 03:50:02.948612 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:02Z","lastTransitionTime":"2026-01-20T03:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.051917 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.051989 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.052013 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.052042 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.052063 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:03Z","lastTransitionTime":"2026-01-20T03:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.155560 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.155631 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.155650 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.155678 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.155697 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:03Z","lastTransitionTime":"2026-01-20T03:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.258537 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.258592 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.258610 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.258635 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.258657 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:03Z","lastTransitionTime":"2026-01-20T03:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.361722 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.361782 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.361804 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.361828 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.361848 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:03Z","lastTransitionTime":"2026-01-20T03:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.465041 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.465154 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.465211 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.465248 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.465308 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:03Z","lastTransitionTime":"2026-01-20T03:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.567918 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.567991 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.568015 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.568042 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.568062 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:03Z","lastTransitionTime":"2026-01-20T03:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.672299 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.672362 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.672379 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.672404 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.672420 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:03Z","lastTransitionTime":"2026-01-20T03:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.684117 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 15:34:37.089871784 +0000 UTC Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.721154 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:03 crc kubenswrapper[4898]: E0120 03:50:03.721338 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.760582 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd8300
04329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:03Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.776261 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.776348 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.776370 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.776395 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.776466 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:03Z","lastTransitionTime":"2026-01-20T03:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.779493 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:03Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.796306 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:03Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.823031 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:03Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.845869 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:03Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.870476 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:03Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.880424 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.880537 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:03 crc 
kubenswrapper[4898]: I0120 03:50:03.880562 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.880598 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.880625 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:03Z","lastTransitionTime":"2026-01-20T03:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.893963 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:03Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.913905 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:03Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.936259 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:03Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.958593 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:03Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.977709 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:03Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.990041 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.990127 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.990147 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.990180 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:03 crc kubenswrapper[4898]: I0120 03:50:03.990199 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:03Z","lastTransitionTime":"2026-01-20T03:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.009074 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:04Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.029766 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:04Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.049930 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:04Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.082841 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667a
afad525fbc17a6ae183d83dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:49Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111415 6339 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111861 6339 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:49.111897 6339 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:49.111918 6339 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 03:49:49.111930 6339 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 03:49:49.111960 6339 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 03:49:49.111956 6339 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03:49:49.111986 6339 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 03:49:49.111971 6339 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 03:49:49.112002 6339 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 03:49:49.112004 6339 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 03:49:49.112039 6339 factory.go:656] Stopping watch factory\\\\nI0120 03:49:49.112064 6339 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:49.112102 6339 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 03:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:04Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.093135 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.093218 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.093241 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.093274 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.093297 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:04Z","lastTransitionTime":"2026-01-20T03:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.101161 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:04Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.120096 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:04Z is after 2025-08-24T17:21:41Z" Jan 20 
03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.196098 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.196160 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.196180 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.196208 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.196228 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:04Z","lastTransitionTime":"2026-01-20T03:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.299798 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.299876 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.299896 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.299927 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.299948 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:04Z","lastTransitionTime":"2026-01-20T03:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.403116 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.403187 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.403210 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.403240 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.403264 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:04Z","lastTransitionTime":"2026-01-20T03:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.506607 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.506674 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.506697 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.506728 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.506749 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:04Z","lastTransitionTime":"2026-01-20T03:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.598515 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.598742 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:50:36.598702304 +0000 UTC m=+83.198490193 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.598824 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.599022 4898 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.599195 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:50:36.599169339 +0000 UTC m=+83.198957228 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.610127 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.610202 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.610232 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.610267 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.610289 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:04Z","lastTransitionTime":"2026-01-20T03:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.685296 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:46:24.684875656 +0000 UTC Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.700351 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.700543 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.700613 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.700684 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.700731 4898 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 
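
The certificate_manager record above reports the kubelet-serving certificate expiring 2026-02-24 with a rotation deadline of 2025-12-24; a second record one second later (below) draws a different deadline, 2026-01-18. That is expected behavior: client-go's certificate manager picks the rotation deadline at a jittered point late in the certificate's validity window and re-draws it on each pass. A sketch of that choice, assuming the usual 70-90% jitter band and a one-year validity (the notBefore below is an assumption; the log only shows notAfter):

```go
// rotation.go - jittered rotation deadline, as in the certificate_manager
// lines: pick a uniform point in an assumed 70-90% slice of the validity
// window. Both logged deadlines (2025-12-24 and 2026-01-18) fall inside
// that band for a one-year certificate.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // assumed jitter range
	return notBefore.Add(time.Duration(frac * float64(total)))
}

func main() {
	// notAfter is from the log; notBefore assumes a one-year validity.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```
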
03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.700832 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:50:36.700808726 +0000 UTC m=+83.300596615 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.700848 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.700738 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.700893 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.700913 4898 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.700926 4898 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.701002 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 03:50:36.700974671 +0000 UTC m=+83.300762540 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.701030 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 03:50:36.701019003 +0000 UTC m=+83.300806872 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.714269 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.714358 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.714385 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.714470 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.714544 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:04Z","lastTransitionTime":"2026-01-20T03:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.720731 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.720904 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.721155 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.721184 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.721368 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:04 crc kubenswrapper[4898]: E0120 03:50:04.721578 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
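
Each failed volume operation above is re-queued with "durationBeforeRetry 32s". The 32s is not a fixed setting: nestedpendingoperations applies a per-operation exponential backoff, doubling the delay on every consecutive failure of the same volume. A sketch of that schedule, with the base delay and cap as assumptions rather than values read from the kubelet source (at a 500ms base, 32s corresponds to the seventh consecutive failure, consistent with a node ~83s into boot):

```go
// backoff.go - sketch of the doubling retry delay implied by
// "durationBeforeRetry 32s": starting from an assumed 500ms base,
// each consecutive failure doubles the wait up to an assumed cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	wait := 500 * time.Millisecond // assumed initial delay
	maxWait := 2 * time.Minute     // assumed cap
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("failure %d: next retry in %v\n", attempt, wait)
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
	// failure 7 prints "next retry in 32s", matching the log.
}
```
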
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.818200 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.818253 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.818275 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.818298 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.818318 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:04Z","lastTransitionTime":"2026-01-20T03:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.921777 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.921832 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.921851 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.921873 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:04 crc kubenswrapper[4898]: I0120 03:50:04.921890 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:04Z","lastTransitionTime":"2026-01-20T03:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.025739 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.025836 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.025864 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.025899 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.025920 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.129345 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.129422 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.129476 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.129508 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.129528 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.232377 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.232491 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.232511 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.232540 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.232562 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.335913 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.335984 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.336002 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.336028 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.336052 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.444946 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.445034 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.445074 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.445103 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.445123 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.548534 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.548610 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.548635 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.548659 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.548677 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.646290 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.646348 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.646370 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.646397 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.646641 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:05 crc kubenswrapper[4898]: E0120 03:50:05.669317 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:05Z is after 
2025-08-24T17:21:41Z" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.676316 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.676395 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.676415 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.676474 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.676501 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.686345 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 01:03:22.966273206 +0000 UTC Jan 20 03:50:05 crc kubenswrapper[4898]: E0120 03:50:05.701597 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:05Z is after 
2025-08-24T17:21:41Z" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.708794 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.708852 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.708874 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.708905 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.708924 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.721268 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:05 crc kubenswrapper[4898]: E0120 03:50:05.721505 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:05 crc kubenswrapper[4898]: E0120 03:50:05.730763 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed2
1\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:05Z is after 2025-08-24T17:21:41Z"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.736352 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.736467 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.736498 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.736530 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.736554 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:05 crc kubenswrapper[4898]: E0120 03:50:05.760783 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload omitted; byte-identical to the 03:50:05.730763 attempt above ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:05Z is after 2025-08-24T17:21:41Z"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.767134 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.767190 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.767208 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.767238 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.767258 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:05 crc kubenswrapper[4898]: E0120 03:50:05.790241 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload omitted; byte-identical to the attempts above ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:05Z is after 2025-08-24T17:21:41Z"
Jan 20 03:50:05 crc kubenswrapper[4898]: E0120 03:50:05.790400 4898 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.793044 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.793104 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.793125 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.793152 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.793171 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.896726 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.896779 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.896791 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.896810 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:05 crc kubenswrapper[4898]: I0120 03:50:05.896823 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:05Z","lastTransitionTime":"2026-01-20T03:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.000276 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.000324 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.000339 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.000361 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.000379 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:06Z","lastTransitionTime":"2026-01-20T03:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.104847 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.104939 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.104964 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.105012 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.105031 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:06Z","lastTransitionTime":"2026-01-20T03:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.208126 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.208187 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.208206 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.208258 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.208277 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:06Z","lastTransitionTime":"2026-01-20T03:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.312091 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.312170 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.312193 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.312226 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.312252 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:06Z","lastTransitionTime":"2026-01-20T03:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.415867 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.416064 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.416086 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.416113 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.416132 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:06Z","lastTransitionTime":"2026-01-20T03:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.518606 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.518663 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.518680 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.518704 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.518720 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:06Z","lastTransitionTime":"2026-01-20T03:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.622855 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.622908 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.622927 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.622951 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.622970 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:06Z","lastTransitionTime":"2026-01-20T03:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.686964 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 07:36:21.039997532 +0000 UTC Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.720368 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.720400 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:06 crc kubenswrapper[4898]: E0120 03:50:06.720617 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.720711 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:06 crc kubenswrapper[4898]: E0120 03:50:06.720910 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:06 crc kubenswrapper[4898]: E0120 03:50:06.720995 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.726394 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.726557 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.726578 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.726652 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.726674 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:06Z","lastTransitionTime":"2026-01-20T03:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.830002 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.830064 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.830082 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.830110 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.830127 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:06Z","lastTransitionTime":"2026-01-20T03:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.933601 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.933693 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.933716 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.933744 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:06 crc kubenswrapper[4898]: I0120 03:50:06.933770 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:06Z","lastTransitionTime":"2026-01-20T03:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.037864 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.037925 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.037942 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.037968 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.037989 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:07Z","lastTransitionTime":"2026-01-20T03:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.141725 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.141788 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.141805 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.141829 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.141850 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:07Z","lastTransitionTime":"2026-01-20T03:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.245559 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.245615 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.245632 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.245654 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.245671 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:07Z","lastTransitionTime":"2026-01-20T03:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.348868 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.348932 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.348949 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.348974 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.348991 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:07Z","lastTransitionTime":"2026-01-20T03:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.451704 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.451781 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.451805 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.451835 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.451857 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:07Z","lastTransitionTime":"2026-01-20T03:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.555299 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.555384 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.555406 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.555457 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.555479 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:07Z","lastTransitionTime":"2026-01-20T03:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.658152 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.658211 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.658227 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.658250 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.658266 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:07Z","lastTransitionTime":"2026-01-20T03:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.687750 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:23:19.827866116 +0000 UTC Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.721242 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:07 crc kubenswrapper[4898]: E0120 03:50:07.721473 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.762171 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.762248 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.762270 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.762302 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.762325 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:07Z","lastTransitionTime":"2026-01-20T03:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.764340 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.781943 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.785979 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:07Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.819780 4898 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:49Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111415 6339 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111861 6339 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:49.111897 6339 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:49.111918 6339 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 03:49:49.111930 6339 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 03:49:49.111960 6339 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 03:49:49.111956 6339 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03:49:49.111986 6339 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 03:49:49.111971 6339 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 03:49:49.112002 6339 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 03:49:49.112004 6339 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 03:49:49.112039 6339 factory.go:656] Stopping watch factory\\\\nI0120 03:49:49.112064 6339 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:49.112102 6339 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 03:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:07Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.837874 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:07Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.857893 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:07Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.866802 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.867049 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.867230 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.867402 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.867636 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:07Z","lastTransitionTime":"2026-01-20T03:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.882330 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:07Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.904673 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:07Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.937790 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:07Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.959610 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:07Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.971007 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.971265 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.971518 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.971791 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.972003 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:07Z","lastTransitionTime":"2026-01-20T03:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.977754 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:07Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:07 crc kubenswrapper[4898]: I0120 03:50:07.997680 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:07Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.023232 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:08Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.048354 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:08 crc kubenswrapper[4898]: E0120 03:50:08.048622 4898 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:50:08 crc kubenswrapper[4898]: E0120 03:50:08.048737 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs podName:e93f051c-f83c-4d27-a695-dd5a33e979f4 nodeName:}" 
failed. No retries permitted until 2026-01-20 03:50:24.048710837 +0000 UTC m=+70.648498726 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs") pod "network-metrics-daemon-5hkf9" (UID: "e93f051c-f83c-4d27-a695-dd5a33e979f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.049222 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:08Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.075752 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.075802 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.075820 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.075845 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.075866 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:08Z","lastTransitionTime":"2026-01-20T03:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.077384 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:08Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.099119 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:08Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.123734 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-m
anager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:08Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.148302 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:08Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.176341 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:08Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.179092 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.179161 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.179186 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.179221 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.179245 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:08Z","lastTransitionTime":"2026-01-20T03:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.282934 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.282988 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.283007 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.283034 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.283051 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:08Z","lastTransitionTime":"2026-01-20T03:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.386032 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.386307 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.386484 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.386644 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.386789 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:08Z","lastTransitionTime":"2026-01-20T03:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.490771 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.490843 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.490863 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.490889 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.490910 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:08Z","lastTransitionTime":"2026-01-20T03:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.594323 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.594578 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.594736 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.594883 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.595010 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:08Z","lastTransitionTime":"2026-01-20T03:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.688216 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:18:58.094258204 +0000 UTC Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.697920 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.698191 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.698332 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.698502 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.698647 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:08Z","lastTransitionTime":"2026-01-20T03:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.720608 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.720683 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:08 crc kubenswrapper[4898]: E0120 03:50:08.720755 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.720611 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:08 crc kubenswrapper[4898]: E0120 03:50:08.720842 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:08 crc kubenswrapper[4898]: E0120 03:50:08.720918 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.722370 4898 scope.go:117] "RemoveContainer" containerID="d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.802252 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.802319 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.802338 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.802364 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.802383 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:08Z","lastTransitionTime":"2026-01-20T03:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.906294 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.906361 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.906381 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.906412 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:08 crc kubenswrapper[4898]: I0120 03:50:08.906464 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:08Z","lastTransitionTime":"2026-01-20T03:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.009903 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.009952 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.009965 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.009986 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.010002 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:09Z","lastTransitionTime":"2026-01-20T03:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.112627 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.112668 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.112689 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.112707 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.112719 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:09Z","lastTransitionTime":"2026-01-20T03:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.159566 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/1.log" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.162944 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd"} Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.163827 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.184581 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae
8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.203957 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.215635 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.215688 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.215705 4898 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.215730 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.215749 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:09Z","lastTransitionTime":"2026-01-20T03:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.224595 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.243617 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.276058 4898 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:49Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111415 6339 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111861 6339 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:49.111897 6339 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:49.111918 6339 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 03:49:49.111930 6339 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 03:49:49.111960 6339 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 03:49:49.111956 6339 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03:49:49.111986 6339 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 03:49:49.111971 6339 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 03:49:49.112002 6339 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 03:49:49.112004 6339 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 03:49:49.112039 6339 factory.go:656] Stopping watch factory\\\\nI0120 03:49:49.112064 6339 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:49.112102 6339 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 
03:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.294354 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.318730 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.318790 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.318810 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.318837 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.318859 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:09Z","lastTransitionTime":"2026-01-20T03:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.320891 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.340551 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recove
ry-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.360303 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.373504 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.399768 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.411405 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.426081 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.426136 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.426151 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.426172 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.426190 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:09Z","lastTransitionTime":"2026-01-20T03:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.429141 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.442901 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.462766 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.480424 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.501542 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.518461 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:09Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.529279 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.529354 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.529378 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.529408 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.529464 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:09Z","lastTransitionTime":"2026-01-20T03:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.632447 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.632691 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.632776 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.632887 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.632961 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:09Z","lastTransitionTime":"2026-01-20T03:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.688618 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:28:47.751790178 +0000 UTC Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.721047 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:09 crc kubenswrapper[4898]: E0120 03:50:09.721285 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.735146 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.735203 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.735218 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.735236 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.735251 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:09Z","lastTransitionTime":"2026-01-20T03:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.837838 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.837938 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.837958 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.837985 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.838001 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:09Z","lastTransitionTime":"2026-01-20T03:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.940487 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.940536 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.940547 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.940566 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:09 crc kubenswrapper[4898]: I0120 03:50:09.940578 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:09Z","lastTransitionTime":"2026-01-20T03:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.043185 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.043233 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.043251 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.043280 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.043298 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:10Z","lastTransitionTime":"2026-01-20T03:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.146057 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.146127 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.146147 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.146173 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.146193 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:10Z","lastTransitionTime":"2026-01-20T03:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.168708 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/2.log" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.169692 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/1.log" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.173638 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd" exitCode=1 Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.173683 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd"} Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.173729 4898 scope.go:117] "RemoveContainer" containerID="d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.174590 4898 scope.go:117] "RemoveContainer" containerID="0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd" Jan 20 03:50:10 crc kubenswrapper[4898]: E0120 03:50:10.174784 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.203149 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
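[editor's note] The "back-off 20s restarting failed container" message in the preceding record comes from kubelet's CrashLoopBackOff handling, which doubles a base delay on each failed restart up to a cap. A small sketch of that policy, assuming the upstream kubelet defaults of a 10s base and 5m cap; neither constant appears in this log:

```go
// Sketch of the CrashLoopBackOff delay schedule: double a base delay per
// failed restart, capped at a maximum. 10s/5m are assumed upstream
// kubelet defaults, not values taken from this log.
package main

import (
	"fmt"
	"time"
)

func crashLoopDelay(restarts int, base, max time.Duration) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> back-off %s\n",
			r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
	}
}
```

Under those assumptions, one prior failed restart yields a 20s delay, matching the message logged for ovnkube-controller above.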
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.223095 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.242541 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.249699 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.249768 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.249794 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.249833 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.249859 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:10Z","lastTransitionTime":"2026-01-20T03:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.262730 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.283950 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.304848 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.323545 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.352767 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.352827 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.352845 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.352870 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.352888 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:10Z","lastTransitionTime":"2026-01-20T03:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.357712 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37187bbf7a307e54b166839b6b9f177b0f5667aafad525fbc17a6ae183d83dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:49:49Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111415 6339 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:49:49.111861 6339 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 03:49:49.111897 6339 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 03:49:49.111918 6339 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 03:49:49.111930 6339 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 03:49:49.111960 6339 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 03:49:49.111956 6339 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 03:49:49.111986 6339 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 03:49:49.111971 6339 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 03:49:49.112002 6339 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 03:49:49.112004 6339 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 03:49:49.112039 6339 factory.go:656] Stopping watch factory\\\\nI0120 03:49:49.112064 6339 ovnkube.go:599] Stopped ovnkube\\\\nI0120 03:49:49.112102 6339 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 
03:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:09Z\\\",\\\"message\\\":\\\"snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:09.806284 6594 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806379 6594 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806557 6594 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 03:50:09.806631 6594 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.807042 6594 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 03:50:09.807055 6594 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 03:50:09.807079 6594 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 03:50:09.807084 6594 factory.go:656] Stopping watch factory\\\\nI0120 03:50:09.807104 6594 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.375945 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.394673 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.415330 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.432161 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.456536 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.456598 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.456618 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.456645 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.456665 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:10Z","lastTransitionTime":"2026-01-20T03:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.467892 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.492039 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.517076 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.538324 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.556500 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc 
kubenswrapper[4898]: I0120 03:50:10.564387 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.564481 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.564503 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.564534 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.564562 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:10Z","lastTransitionTime":"2026-01-20T03:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.580677 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:10Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.667972 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.668080 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.668102 4898 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.668133 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.668154 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:10Z","lastTransitionTime":"2026-01-20T03:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.689554 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:11:03.517972882 +0000 UTC Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.721171 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.721330 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.721218 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:10 crc kubenswrapper[4898]: E0120 03:50:10.721405 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:10 crc kubenswrapper[4898]: E0120 03:50:10.721609 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:10 crc kubenswrapper[4898]: E0120 03:50:10.721813 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.771235 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.771321 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.771344 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.771367 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.771388 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:10Z","lastTransitionTime":"2026-01-20T03:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.874577 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.874661 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.874684 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.874709 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.874727 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:10Z","lastTransitionTime":"2026-01-20T03:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.978782 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.978833 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.978851 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.978873 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:10 crc kubenswrapper[4898]: I0120 03:50:10.978890 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:10Z","lastTransitionTime":"2026-01-20T03:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.081419 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.081504 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.081522 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.081550 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.081576 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:11Z","lastTransitionTime":"2026-01-20T03:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.180060 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/2.log" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.183691 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.183772 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.183790 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.183813 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.183829 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:11Z","lastTransitionTime":"2026-01-20T03:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.187657 4898 scope.go:117] "RemoveContainer" containerID="0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd" Jan 20 03:50:11 crc kubenswrapper[4898]: E0120 03:50:11.188048 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.207990 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.230672 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.249566 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.281697 4898 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:09Z\\\",\\\"message\\\":\\\"snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:09.806284 6594 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806379 6594 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806557 6594 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 03:50:09.806631 6594 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.807042 6594 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 03:50:09.807055 6594 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 03:50:09.807079 6594 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 03:50:09.807084 6594 factory.go:656] Stopping watch factory\\\\nI0120 03:50:09.807104 6594 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.287497 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.287557 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.287575 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.287601 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.287619 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:11Z","lastTransitionTime":"2026-01-20T03:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.299583 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.320704 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 
03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.339340 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.359749 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.380488 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.391330 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.391393 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.391419 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.391486 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.391527 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:11Z","lastTransitionTime":"2026-01-20T03:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.400516 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.434193 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.454121 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.471218 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.495916 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.495965 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.495982 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.496007 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.496025 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:11Z","lastTransitionTime":"2026-01-20T03:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.501095 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9
f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.523672 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.540548 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.559368 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.576018 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:11Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:11 crc 
kubenswrapper[4898]: I0120 03:50:11.599850 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.599919 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.599938 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.599963 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.599980 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:11Z","lastTransitionTime":"2026-01-20T03:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.690191 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:50:19.328941735 +0000 UTC Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.702808 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.702940 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.702963 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.702990 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.703008 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:11Z","lastTransitionTime":"2026-01-20T03:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.721191 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:11 crc kubenswrapper[4898]: E0120 03:50:11.721383 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.805917 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.805974 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.805993 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.806019 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.806036 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:11Z","lastTransitionTime":"2026-01-20T03:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.909337 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.909403 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.909421 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.909475 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:11 crc kubenswrapper[4898]: I0120 03:50:11.909494 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:11Z","lastTransitionTime":"2026-01-20T03:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.019665 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.019724 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.019743 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.019767 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.019787 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:12Z","lastTransitionTime":"2026-01-20T03:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.122011 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.122058 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.122067 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.122086 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.122103 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:12Z","lastTransitionTime":"2026-01-20T03:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.225216 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.225261 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.225270 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.225287 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.225298 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:12Z","lastTransitionTime":"2026-01-20T03:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.327968 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.328029 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.328047 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.328072 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.328089 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:12Z","lastTransitionTime":"2026-01-20T03:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.430632 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.430707 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.430726 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.430753 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.430774 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:12Z","lastTransitionTime":"2026-01-20T03:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.533808 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.533876 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.533895 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.533922 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.533945 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:12Z","lastTransitionTime":"2026-01-20T03:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.636650 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.636710 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.636729 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.636752 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.636770 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:12Z","lastTransitionTime":"2026-01-20T03:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.691160 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:31:39.401174647 +0000 UTC Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.720629 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.720657 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.720660 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:12 crc kubenswrapper[4898]: E0120 03:50:12.720828 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:12 crc kubenswrapper[4898]: E0120 03:50:12.720958 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:12 crc kubenswrapper[4898]: E0120 03:50:12.721100 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.739718 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.739780 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.739805 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.739837 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.739861 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:12Z","lastTransitionTime":"2026-01-20T03:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.843400 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.843549 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.843577 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.843607 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.843629 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:12Z","lastTransitionTime":"2026-01-20T03:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.946164 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.946239 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.946261 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.946291 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:12 crc kubenswrapper[4898]: I0120 03:50:12.946314 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:12Z","lastTransitionTime":"2026-01-20T03:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.048982 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.049049 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.049067 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.049091 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.049113 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:13Z","lastTransitionTime":"2026-01-20T03:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.152172 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.152248 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.152268 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.152294 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.152312 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:13Z","lastTransitionTime":"2026-01-20T03:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.254778 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.254832 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.254854 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.254883 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.254903 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:13Z","lastTransitionTime":"2026-01-20T03:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.357875 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.357931 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.357948 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.357971 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.357994 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:13Z","lastTransitionTime":"2026-01-20T03:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.461186 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.461296 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.461314 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.461337 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.461353 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:13Z","lastTransitionTime":"2026-01-20T03:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.564011 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.564059 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.564070 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.564085 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.564095 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:13Z","lastTransitionTime":"2026-01-20T03:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.666273 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.666331 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.666348 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.666372 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.666396 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:13Z","lastTransitionTime":"2026-01-20T03:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.692218 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:47:13.852498139 +0000 UTC Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.720951 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:13 crc kubenswrapper[4898]: E0120 03:50:13.721628 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.746263 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.770788 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.770844 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.770862 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.770886 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.770904 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:13Z","lastTransitionTime":"2026-01-20T03:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.771544 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702
aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\
\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.794497 4898 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.811357 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.830714 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.852651 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.873837 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.873880 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.873894 4898 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.873914 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.873928 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:13Z","lastTransitionTime":"2026-01-20T03:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.874048 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.896311 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.917421 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.941704 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.964704 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.976743 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.976840 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.976864 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.976898 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.976923 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:13Z","lastTransitionTime":"2026-01-20T03:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:13 crc kubenswrapper[4898]: I0120 03:50:13.984502 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:13Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.014898 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:09Z\\\",\\\"message\\\":\\\"snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:09.806284 6594 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806379 6594 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806557 6594 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 03:50:09.806631 6594 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.807042 6594 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 03:50:09.807055 6594 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 03:50:09.807079 6594 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 03:50:09.807084 6594 factory.go:656] Stopping watch factory\\\\nI0120 03:50:09.807104 6594 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:14Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.030299 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:14Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.045960 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:14Z is after 2025-08-24T17:21:41Z" Jan 20 
03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.063716 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:14Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.078557 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:14Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.079763 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.079793 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.079803 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.079819 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.079830 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:14Z","lastTransitionTime":"2026-01-20T03:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.104133 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
0T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd83000432
9bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:14Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.182883 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.182943 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.182964 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.182992 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.183010 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:14Z","lastTransitionTime":"2026-01-20T03:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.287145 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.287223 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.287245 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.287277 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.287299 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:14Z","lastTransitionTime":"2026-01-20T03:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.389891 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.389949 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.389967 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.389992 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.390010 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:14Z","lastTransitionTime":"2026-01-20T03:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.493939 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.494009 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.494027 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.494053 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.494071 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:14Z","lastTransitionTime":"2026-01-20T03:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.597455 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.597872 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.597883 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.597901 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.597912 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:14Z","lastTransitionTime":"2026-01-20T03:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.692642 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:45:39.908568808 +0000 UTC Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.701284 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.701334 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.701347 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.701369 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.701383 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:14Z","lastTransitionTime":"2026-01-20T03:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.720904 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.720962 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.720908 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:14 crc kubenswrapper[4898]: E0120 03:50:14.721051 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:14 crc kubenswrapper[4898]: E0120 03:50:14.721188 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:14 crc kubenswrapper[4898]: E0120 03:50:14.721309 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.804954 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.805053 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.805078 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.805114 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.805138 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:14Z","lastTransitionTime":"2026-01-20T03:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.909012 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.909087 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.909106 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.909133 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:14 crc kubenswrapper[4898]: I0120 03:50:14.909153 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:14Z","lastTransitionTime":"2026-01-20T03:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.012769 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.012853 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.012872 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.012899 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.012918 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:15Z","lastTransitionTime":"2026-01-20T03:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.115815 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.115882 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.115900 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.115926 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.115944 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:15Z","lastTransitionTime":"2026-01-20T03:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.218670 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.218721 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.218740 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.218765 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.218786 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:15Z","lastTransitionTime":"2026-01-20T03:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.322244 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.322340 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.322352 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.322371 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.322389 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:15Z","lastTransitionTime":"2026-01-20T03:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.425821 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.425875 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.425888 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.425908 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.425920 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:15Z","lastTransitionTime":"2026-01-20T03:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.528318 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.528389 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.528407 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.528474 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.528493 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:15Z","lastTransitionTime":"2026-01-20T03:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.632084 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.632138 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.632150 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.632169 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.632184 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:15Z","lastTransitionTime":"2026-01-20T03:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.693151 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:45:33.380985325 +0000 UTC Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.720531 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:15 crc kubenswrapper[4898]: E0120 03:50:15.720692 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.733689 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.733725 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.733733 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.733746 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.733758 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:15Z","lastTransitionTime":"2026-01-20T03:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.836608 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.836661 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.836670 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.836688 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.836703 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:15Z","lastTransitionTime":"2026-01-20T03:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.939841 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.939905 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.939922 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.939950 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:15 crc kubenswrapper[4898]: I0120 03:50:15.939967 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:15Z","lastTransitionTime":"2026-01-20T03:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.043220 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.043290 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.043315 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.043346 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.043368 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.045467 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.045525 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.045548 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.045573 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.045592 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: E0120 03:50:16.070771 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:16Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.076380 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.076422 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.076476 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.076507 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.076526 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: E0120 03:50:16.095485 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:16Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.101463 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.101513 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.101531 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.101558 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.101576 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: E0120 03:50:16.115360 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:16Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.120087 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.120150 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.120168 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.120194 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.120211 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: E0120 03:50:16.140609 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:16Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.145725 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.145769 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.145785 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.145810 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.145829 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: E0120 03:50:16.162867 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:16Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:16 crc kubenswrapper[4898]: E0120 03:50:16.163109 4898 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.165322 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
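Every one of the failed status patches above is rejected by the same check: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months before the node clock (2026-01-20T03:50:16Z), so the TLS handshake fails before the patch body is ever evaluated. As a minimal illustration of that validity-window check (a sketch, not kubelet or webhook source; the certificate file path is hypothetical), in Go:

// Minimal sketch: reproduce the x509 validity-window check that rejects
// the webhook's serving certificate. crypto/x509 verification applies
// exactly this NotBefore/NotAfter comparison and reports it as
// "x509: certificate has expired or is not yet valid: ...", which is the
// string crypto/tls surfaces in the kubelet log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; in the log this would be the cert served on :9743.
	pemBytes, err := os.ReadFile("webhook-serving.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	now := time.Now()
	// The validity-window check: current time must fall inside
	// [NotBefore, NotAfter] or the chain is rejected outright.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: current time %s is not within [%s, %s]\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}

Until that certificate is rotated (or the clock and the certificate's validity window realign), every POST to the webhook fails identically, which is why the kubelet exhausts its retry budget and logs "update node status exceeds retry count".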
event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.165369 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.165392 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.165418 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.165461 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.268309 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.268374 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.268396 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.268425 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.268479 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.370821 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.370866 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.370884 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.370907 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.370924 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.474792 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.474875 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.474893 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.474922 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.474941 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.577962 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.578091 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.578111 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.578139 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.578157 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.681192 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.681270 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.681289 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.681818 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.681884 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.693479 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:46:51.762831773 +0000 UTC Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.720301 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.720341 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.720382 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:16 crc kubenswrapper[4898]: E0120 03:50:16.720511 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:16 crc kubenswrapper[4898]: E0120 03:50:16.720582 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:16 crc kubenswrapper[4898]: E0120 03:50:16.720693 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.784815 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.784871 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.784883 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.784900 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.784909 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.887309 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.887342 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.887350 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.887363 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.887373 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.990375 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.990488 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.990518 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.990553 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:16 crc kubenswrapper[4898]: I0120 03:50:16.990581 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:16Z","lastTransitionTime":"2026-01-20T03:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.093842 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.093903 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.093921 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.093947 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.093964 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:17Z","lastTransitionTime":"2026-01-20T03:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.197508 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.197546 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.197557 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.197571 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.197581 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:17Z","lastTransitionTime":"2026-01-20T03:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.300489 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.300536 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.300549 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.300568 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.300580 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:17Z","lastTransitionTime":"2026-01-20T03:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.410222 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.410405 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.411002 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.411066 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.411099 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:17Z","lastTransitionTime":"2026-01-20T03:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.515269 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.515339 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.515358 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.515387 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.515410 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:17Z","lastTransitionTime":"2026-01-20T03:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.618852 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.618908 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.618922 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.618941 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.618954 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:17Z","lastTransitionTime":"2026-01-20T03:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.694286 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:26:11.112699414 +0000 UTC Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.720549 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:17 crc kubenswrapper[4898]: E0120 03:50:17.720717 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.721891 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.721927 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.721936 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.721954 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.721969 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:17Z","lastTransitionTime":"2026-01-20T03:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.825448 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.825518 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.825530 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.825548 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.825561 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:17Z","lastTransitionTime":"2026-01-20T03:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.928822 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.928870 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.928886 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.928908 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:17 crc kubenswrapper[4898]: I0120 03:50:17.928923 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:17Z","lastTransitionTime":"2026-01-20T03:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.031637 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.031676 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.031684 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.031702 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.031714 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:18Z","lastTransitionTime":"2026-01-20T03:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.134769 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.134833 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.134846 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.134863 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.134875 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:18Z","lastTransitionTime":"2026-01-20T03:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.237212 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.237272 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.237289 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.237311 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.237328 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:18Z","lastTransitionTime":"2026-01-20T03:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.339149 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.339180 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.339188 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.339201 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.339210 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:18Z","lastTransitionTime":"2026-01-20T03:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.441718 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.441749 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.441758 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.441771 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.441780 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:18Z","lastTransitionTime":"2026-01-20T03:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.544277 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.544306 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.544316 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.544329 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.544338 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:18Z","lastTransitionTime":"2026-01-20T03:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.646335 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.646378 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.646389 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.646407 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.646419 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:18Z","lastTransitionTime":"2026-01-20T03:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.695142 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:17:47.432857966 +0000 UTC Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.720594 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.720605 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:18 crc kubenswrapper[4898]: E0120 03:50:18.720719 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:18 crc kubenswrapper[4898]: E0120 03:50:18.720824 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.720614 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:18 crc kubenswrapper[4898]: E0120 03:50:18.720894 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.748636 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.748682 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.748692 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.748707 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.748717 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:18Z","lastTransitionTime":"2026-01-20T03:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.851282 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.851349 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.851372 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.851401 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.851422 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:18Z","lastTransitionTime":"2026-01-20T03:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.954094 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.954133 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.954142 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.954157 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:18 crc kubenswrapper[4898]: I0120 03:50:18.954170 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:18Z","lastTransitionTime":"2026-01-20T03:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.056910 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.056944 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.056952 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.056965 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.056973 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:19Z","lastTransitionTime":"2026-01-20T03:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.159128 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.159166 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.159175 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.159191 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.159202 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:19Z","lastTransitionTime":"2026-01-20T03:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.266587 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.266660 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.266680 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.266712 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.266732 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:19Z","lastTransitionTime":"2026-01-20T03:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.369239 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.369506 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.369568 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.369628 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.369691 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:19Z","lastTransitionTime":"2026-01-20T03:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.473369 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.473480 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.473563 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.473626 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.473679 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:19Z","lastTransitionTime":"2026-01-20T03:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.576616 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.577308 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.577507 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.577639 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.577770 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:19Z","lastTransitionTime":"2026-01-20T03:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.680059 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.680105 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.680121 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.680143 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.680160 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:19Z","lastTransitionTime":"2026-01-20T03:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.695771 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:19:30.773922605 +0000 UTC Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.720598 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:19 crc kubenswrapper[4898]: E0120 03:50:19.720827 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.783308 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.783387 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.783407 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.783478 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.783507 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:19Z","lastTransitionTime":"2026-01-20T03:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.886550 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.886597 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.886608 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.886625 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.886637 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:19Z","lastTransitionTime":"2026-01-20T03:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.988992 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.989252 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.989316 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.989377 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:19 crc kubenswrapper[4898]: I0120 03:50:19.989462 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:19Z","lastTransitionTime":"2026-01-20T03:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.091959 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.091995 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.092005 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.092021 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.092031 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:20Z","lastTransitionTime":"2026-01-20T03:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.194936 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.194990 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.195005 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.195024 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.195038 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:20Z","lastTransitionTime":"2026-01-20T03:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.298282 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.298541 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.298606 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.298706 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.298773 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:20Z","lastTransitionTime":"2026-01-20T03:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.401218 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.401260 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.401269 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.401286 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.401302 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:20Z","lastTransitionTime":"2026-01-20T03:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.503151 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.503187 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.503198 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.503212 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.503222 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:20Z","lastTransitionTime":"2026-01-20T03:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.606379 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.606466 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.606485 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.606513 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.606534 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:20Z","lastTransitionTime":"2026-01-20T03:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.696484 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:08:04.242161775 +0000 UTC Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.709742 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.709784 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.709793 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.709815 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.709829 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:20Z","lastTransitionTime":"2026-01-20T03:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.721039 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.721070 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:20 crc kubenswrapper[4898]: E0120 03:50:20.721160 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:20 crc kubenswrapper[4898]: E0120 03:50:20.721228 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.721229 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:20 crc kubenswrapper[4898]: E0120 03:50:20.721281 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.812934 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.812988 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.813006 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.813030 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.813048 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:20Z","lastTransitionTime":"2026-01-20T03:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.915028 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.915085 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.915106 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.915128 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:20 crc kubenswrapper[4898]: I0120 03:50:20.915144 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:20Z","lastTransitionTime":"2026-01-20T03:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.017014 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.017053 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.017069 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.017090 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.017101 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:21Z","lastTransitionTime":"2026-01-20T03:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.119373 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.119422 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.119464 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.119485 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.119500 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:21Z","lastTransitionTime":"2026-01-20T03:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.221916 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.222014 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.222076 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.222103 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.222159 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:21Z","lastTransitionTime":"2026-01-20T03:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.324520 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.324592 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.324612 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.324644 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.324663 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:21Z","lastTransitionTime":"2026-01-20T03:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.427249 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.427305 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.427316 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.427335 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.427347 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:21Z","lastTransitionTime":"2026-01-20T03:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.530481 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.530544 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.530569 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.530585 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.530596 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:21Z","lastTransitionTime":"2026-01-20T03:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.633898 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.633957 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.633972 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.633996 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.634014 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:21Z","lastTransitionTime":"2026-01-20T03:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.696588 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:39:36.009073244 +0000 UTC Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.721013 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:21 crc kubenswrapper[4898]: E0120 03:50:21.721161 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.736726 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.736760 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.736769 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.736780 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.736789 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:21Z","lastTransitionTime":"2026-01-20T03:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.839472 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.839507 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.839518 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.839531 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.839541 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:21Z","lastTransitionTime":"2026-01-20T03:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.941457 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.941488 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.941497 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.941509 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:21 crc kubenswrapper[4898]: I0120 03:50:21.941518 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:21Z","lastTransitionTime":"2026-01-20T03:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.043489 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.043518 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.043527 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.043538 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.043547 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:22Z","lastTransitionTime":"2026-01-20T03:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.146521 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.146578 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.146597 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.146624 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.146640 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:22Z","lastTransitionTime":"2026-01-20T03:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.248802 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.248868 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.248886 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.248913 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.248932 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:22Z","lastTransitionTime":"2026-01-20T03:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.352502 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.352558 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.352570 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.352594 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.352608 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:22Z","lastTransitionTime":"2026-01-20T03:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.455119 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.455225 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.455245 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.455275 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.455458 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:22Z","lastTransitionTime":"2026-01-20T03:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.558468 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.558535 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.558552 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.558580 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.558601 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:22Z","lastTransitionTime":"2026-01-20T03:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.661132 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.661211 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.661230 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.661255 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.661299 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:22Z","lastTransitionTime":"2026-01-20T03:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.697507 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:05:18.884029767 +0000 UTC Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.720932 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:22 crc kubenswrapper[4898]: E0120 03:50:22.721152 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.721467 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:22 crc kubenswrapper[4898]: E0120 03:50:22.721570 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.721723 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:22 crc kubenswrapper[4898]: E0120 03:50:22.721933 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.763610 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.763652 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.763663 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.763680 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.763694 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:22Z","lastTransitionTime":"2026-01-20T03:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.866377 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.866446 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.866460 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.866482 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.866500 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:22Z","lastTransitionTime":"2026-01-20T03:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.968661 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.968702 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.968711 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.968725 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:22 crc kubenswrapper[4898]: I0120 03:50:22.968733 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:22Z","lastTransitionTime":"2026-01-20T03:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.071193 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.071229 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.071240 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.071252 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.071261 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:23Z","lastTransitionTime":"2026-01-20T03:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.173791 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.173837 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.173849 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.173867 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.173881 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:23Z","lastTransitionTime":"2026-01-20T03:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.276717 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.276784 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.276803 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.276834 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.276852 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:23Z","lastTransitionTime":"2026-01-20T03:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.380390 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.380520 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.380542 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.380589 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.380613 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:23Z","lastTransitionTime":"2026-01-20T03:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.484548 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.484654 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.484680 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.484715 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.484738 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:23Z","lastTransitionTime":"2026-01-20T03:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.587156 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.587201 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.587210 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.587226 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.587236 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:23Z","lastTransitionTime":"2026-01-20T03:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.690027 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.690097 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.690108 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.690132 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.690144 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:23Z","lastTransitionTime":"2026-01-20T03:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.698283 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:43:54.875697697 +0000 UTC Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.721023 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:23 crc kubenswrapper[4898]: E0120 03:50:23.721194 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.721748 4898 scope.go:117] "RemoveContainer" containerID="0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd" Jan 20 03:50:23 crc kubenswrapper[4898]: E0120 03:50:23.722080 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.738349 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.750850 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.766140 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.780548 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.792964 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.792988 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.792998 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.793012 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 
03:50:23.793022 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:23Z","lastTransitionTime":"2026-01-20T03:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.795528 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.808700 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.820412 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.845896 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d6
1916aa9016c4295bdc46bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:09Z\\\",\\\"message\\\":\\\"snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:09.806284 6594 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806379 6594 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806557 6594 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 03:50:09.806631 6594 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.807042 6594 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 03:50:09.807055 6594 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 03:50:09.807079 6594 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 03:50:09.807084 6594 factory.go:656] Stopping watch factory\\\\nI0120 03:50:09.807104 6594 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.857493 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.871114 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.886904 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z"
Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.896268 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.896318 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.896330 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.896351 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.896364 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:23Z","lastTransitionTime":"2026-01-20T03:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.901522 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z"
Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.924311 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.941366 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.955732 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422c
e9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.968511 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.979417 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.991284 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:23Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.999113 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.999156 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.999169 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.999193 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:23 crc kubenswrapper[4898]: I0120 03:50:23.999206 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:23Z","lastTransitionTime":"2026-01-20T03:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.063385 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:24 crc kubenswrapper[4898]: E0120 03:50:24.063572 4898 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:50:24 crc kubenswrapper[4898]: E0120 03:50:24.063645 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs podName:e93f051c-f83c-4d27-a695-dd5a33e979f4 nodeName:}" failed. No retries permitted until 2026-01-20 03:50:56.063626521 +0000 UTC m=+102.663414380 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs") pod "network-metrics-daemon-5hkf9" (UID: "e93f051c-f83c-4d27-a695-dd5a33e979f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.102267 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.102302 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.102310 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.102326 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.102336 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:24Z","lastTransitionTime":"2026-01-20T03:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.204281 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.204329 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.204346 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.204369 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.204387 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:24Z","lastTransitionTime":"2026-01-20T03:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.231404 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-897rl_1288aab6-09fa-40a3-8ff8-e00002a32d61/kube-multus/0.log" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.231531 4898 generic.go:334] "Generic (PLEG): container finished" podID="1288aab6-09fa-40a3-8ff8-e00002a32d61" containerID="61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7" exitCode=1 Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.231598 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-897rl" event={"ID":"1288aab6-09fa-40a3-8ff8-e00002a32d61","Type":"ContainerDied","Data":"61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7"} Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.232391 4898 scope.go:117] "RemoveContainer" containerID="61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.257703 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.270319 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.285448 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.299291 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:24Z\\\",\\\"message\\\":\\\"2026-01-20T03:49:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e\\\\n2026-01-20T03:49:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e to /host/opt/cni/bin/\\\\n2026-01-20T03:49:39Z [verbose] multus-daemon started\\\\n2026-01-20T03:49:39Z [verbose] Readiness Indicator file check\\\\n2026-01-20T03:50:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.311778 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.312181 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.312199 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.312228 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.312248 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:24Z","lastTransitionTime":"2026-01-20T03:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.313444 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.330594 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.345247 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.360576 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.374023 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.386877 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.398921 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.412621 4898 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.415254 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.415278 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.415286 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.415301 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.415310 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:24Z","lastTransitionTime":"2026-01-20T03:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.423995 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.443170 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:09Z\\\",\\\"message\\\":\\\"snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:09.806284 6594 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806379 6594 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806557 6594 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 03:50:09.806631 6594 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.807042 6594 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 03:50:09.807055 6594 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 03:50:09.807079 6594 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 03:50:09.807084 6594 factory.go:656] Stopping watch factory\\\\nI0120 03:50:09.807104 6594 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.455861 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.470193 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 
03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.486946 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.506525 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:24Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.517731 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.517793 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.517810 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.517836 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.517855 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:24Z","lastTransitionTime":"2026-01-20T03:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.621115 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.621174 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.621192 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.621218 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.621235 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:24Z","lastTransitionTime":"2026-01-20T03:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.698843 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 02:01:15.660000207 +0000 UTC Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.721274 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.721324 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.721399 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:24 crc kubenswrapper[4898]: E0120 03:50:24.721479 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:24 crc kubenswrapper[4898]: E0120 03:50:24.721623 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:24 crc kubenswrapper[4898]: E0120 03:50:24.721741 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.724042 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.724083 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.724103 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.724125 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.724143 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:24Z","lastTransitionTime":"2026-01-20T03:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.826406 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.826497 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.826517 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.826539 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.826556 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:24Z","lastTransitionTime":"2026-01-20T03:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.929834 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.929893 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.929917 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.929948 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:24 crc kubenswrapper[4898]: I0120 03:50:24.929969 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:24Z","lastTransitionTime":"2026-01-20T03:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.033056 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.033109 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.033126 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.033148 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.033165 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:25Z","lastTransitionTime":"2026-01-20T03:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.136545 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.136610 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.136631 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.136656 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.136675 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:25Z","lastTransitionTime":"2026-01-20T03:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.237759 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-897rl_1288aab6-09fa-40a3-8ff8-e00002a32d61/kube-multus/0.log" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.237837 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-897rl" event={"ID":"1288aab6-09fa-40a3-8ff8-e00002a32d61","Type":"ContainerStarted","Data":"a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b"} Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.238522 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.238578 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.238596 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.238619 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.238636 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:25Z","lastTransitionTime":"2026-01-20T03:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.253829 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.264614 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.285310 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.298498 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.331370 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d6
1916aa9016c4295bdc46bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:09Z\\\",\\\"message\\\":\\\"snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:09.806284 6594 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806379 6594 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806557 6594 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 03:50:09.806631 6594 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.807042 6594 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 03:50:09.807055 6594 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 03:50:09.807079 6594 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 03:50:09.807084 6594 factory.go:656] Stopping watch factory\\\\nI0120 03:50:09.807104 6594 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z"
Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.341003 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.341050 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.341070 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.341098 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.341118 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:25Z","lastTransitionTime":"2026-01-20T03:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.343798 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.356816 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 
03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.374184 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.388239 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.420522 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.437761 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z"
Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.444579 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.444631 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.444654 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.444678 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.444696 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:25Z","lastTransitionTime":"2026-01-20T03:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.467472 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.490318 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:24Z\\\",\\\"message\\\":\\\"2026-01-20T03:49:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e\\\\n2026-01-20T03:49:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e to /host/opt/cni/bin/\\\\n2026-01-20T03:49:39Z [verbose] multus-daemon started\\\\n2026-01-20T03:49:39Z [verbose] Readiness Indicator file check\\\\n2026-01-20T03:50:24Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.502805 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.526407 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.542927 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.547423 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.547531 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.547551 4898 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.547576 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.547596 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:25Z","lastTransitionTime":"2026-01-20T03:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.563992 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.585646 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.651555 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.651599 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.651612 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.651634 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.651647 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:25Z","lastTransitionTime":"2026-01-20T03:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.699865 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:26:47.391760855 +0000 UTC Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.721350 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:25 crc kubenswrapper[4898]: E0120 03:50:25.721586 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.754046 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.754085 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.754116 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.754133 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.754144 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:25Z","lastTransitionTime":"2026-01-20T03:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.857020 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.857082 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.857102 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.857125 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.857139 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:25Z","lastTransitionTime":"2026-01-20T03:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.959934 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.959974 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.959983 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.960001 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:25 crc kubenswrapper[4898]: I0120 03:50:25.960014 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:25Z","lastTransitionTime":"2026-01-20T03:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.063249 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.063302 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.063319 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.063342 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.063358 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.166287 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.166329 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.166340 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.166358 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.166370 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.269145 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.269184 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.269194 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.269210 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.269222 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.371424 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.371488 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.371497 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.371515 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.371526 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.473784 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.473875 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.473894 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.473920 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.473938 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.548703 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.548730 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.548740 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.548771 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.548780 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: E0120 03:50:26.561073 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.565569 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.565597 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.565606 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.565618 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.565629 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: E0120 03:50:26.576801 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.580399 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.580448 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.580458 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.580468 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.580476 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: E0120 03:50:26.595347 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.598403 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.598424 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.598445 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.598456 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.598464 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: E0120 03:50:26.610532 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.614025 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.614069 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.614082 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.614101 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.614113 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: E0120 03:50:26.625542 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:26 crc kubenswrapper[4898]: E0120 03:50:26.625675 4898 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.626991 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.627020 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.627029 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.627042 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.627049 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.700244 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:09:00.183086711 +0000 UTC Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.720660 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.720734 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.720804 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:26 crc kubenswrapper[4898]: E0120 03:50:26.721001 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:26 crc kubenswrapper[4898]: E0120 03:50:26.721147 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:26 crc kubenswrapper[4898]: E0120 03:50:26.721239 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.729393 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.729467 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.729486 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.729509 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.729529 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.832266 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.832337 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.832360 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.832385 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.832403 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.934833 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.934881 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.934894 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.934913 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:26 crc kubenswrapper[4898]: I0120 03:50:26.934926 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:26Z","lastTransitionTime":"2026-01-20T03:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.036994 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.037050 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.037066 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.037090 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.037108 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:27Z","lastTransitionTime":"2026-01-20T03:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.139887 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.139932 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.139953 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.139974 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.139990 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:27Z","lastTransitionTime":"2026-01-20T03:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.243268 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.243328 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.243345 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.243373 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.243390 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:27Z","lastTransitionTime":"2026-01-20T03:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.346262 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.346332 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.346382 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.346409 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.346464 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:27Z","lastTransitionTime":"2026-01-20T03:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.454657 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.454731 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.454825 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.454870 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.454890 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:27Z","lastTransitionTime":"2026-01-20T03:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.558590 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.558670 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.558686 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.558711 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.558730 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:27Z","lastTransitionTime":"2026-01-20T03:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.661973 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.662030 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.662050 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.662075 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.662093 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:27Z","lastTransitionTime":"2026-01-20T03:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.700358 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:54:33.840312105 +0000 UTC Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.723001 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:27 crc kubenswrapper[4898]: E0120 03:50:27.723142 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.764840 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.764912 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.764927 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.764945 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.764959 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:27Z","lastTransitionTime":"2026-01-20T03:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.867916 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.867998 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.868015 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.868041 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.868059 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:27Z","lastTransitionTime":"2026-01-20T03:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.971531 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.971624 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.971648 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.971688 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:27 crc kubenswrapper[4898]: I0120 03:50:27.971714 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:27Z","lastTransitionTime":"2026-01-20T03:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.075612 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.075693 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.075711 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.075742 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.075761 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:28Z","lastTransitionTime":"2026-01-20T03:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.180284 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.180372 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.180390 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.180455 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.180481 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:28Z","lastTransitionTime":"2026-01-20T03:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.284165 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.284228 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.284248 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.284273 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.284291 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:28Z","lastTransitionTime":"2026-01-20T03:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.388589 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.388657 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.388675 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.388701 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.388718 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:28Z","lastTransitionTime":"2026-01-20T03:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.491843 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.491936 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.491961 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.491995 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.492024 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:28Z","lastTransitionTime":"2026-01-20T03:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.594238 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.594279 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.594289 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.594343 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.594354 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:28Z","lastTransitionTime":"2026-01-20T03:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.697657 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.697733 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.697749 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.697775 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.697793 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:28Z","lastTransitionTime":"2026-01-20T03:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.701168 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:23:03.126089026 +0000 UTC Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.720859 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.720863 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:28 crc kubenswrapper[4898]: E0120 03:50:28.721041 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.720863 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:28 crc kubenswrapper[4898]: E0120 03:50:28.721204 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:28 crc kubenswrapper[4898]: E0120 03:50:28.721344 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.800480 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.800555 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.800572 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.800600 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.800622 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:28Z","lastTransitionTime":"2026-01-20T03:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.905055 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.905127 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.905146 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.905179 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:28 crc kubenswrapper[4898]: I0120 03:50:28.905200 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:28Z","lastTransitionTime":"2026-01-20T03:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.008828 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.008919 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.008940 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.008969 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.008989 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:29Z","lastTransitionTime":"2026-01-20T03:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.111576 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.111649 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.111667 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.111699 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.111718 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:29Z","lastTransitionTime":"2026-01-20T03:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.215213 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.215595 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.215767 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.215900 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.216039 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:29Z","lastTransitionTime":"2026-01-20T03:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.320114 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.320198 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.320218 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.320248 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.320270 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:29Z","lastTransitionTime":"2026-01-20T03:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.424076 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.424168 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.424191 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.424233 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.424261 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:29Z","lastTransitionTime":"2026-01-20T03:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.528321 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.528383 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.528403 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.528470 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.528513 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:29Z","lastTransitionTime":"2026-01-20T03:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.632009 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.632083 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.632101 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.632129 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.632149 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:29Z","lastTransitionTime":"2026-01-20T03:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.702221 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:00:19.444075754 +0000 UTC Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.721184 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:29 crc kubenswrapper[4898]: E0120 03:50:29.721397 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.735750 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.735810 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.735832 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.735866 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.735890 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:29Z","lastTransitionTime":"2026-01-20T03:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.838243 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.838280 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.838289 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.838306 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.838317 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:29Z","lastTransitionTime":"2026-01-20T03:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.942480 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.942900 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.943169 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.943358 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:29 crc kubenswrapper[4898]: I0120 03:50:29.943556 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:29Z","lastTransitionTime":"2026-01-20T03:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.046778 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.047343 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.047517 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.047704 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.047831 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:30Z","lastTransitionTime":"2026-01-20T03:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.151282 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.151719 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.151865 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.152014 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.152199 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:30Z","lastTransitionTime":"2026-01-20T03:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.255681 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.255774 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.255798 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.255832 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.255854 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:30Z","lastTransitionTime":"2026-01-20T03:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.359015 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.359102 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.359129 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.359162 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.359187 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:30Z","lastTransitionTime":"2026-01-20T03:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.461739 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.461800 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.461817 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.461840 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.461859 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:30Z","lastTransitionTime":"2026-01-20T03:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.564758 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.564837 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.564864 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.564905 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.564934 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:30Z","lastTransitionTime":"2026-01-20T03:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.667924 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.667987 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.668008 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.668037 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.668060 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:30Z","lastTransitionTime":"2026-01-20T03:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.702624 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:32:34.798552921 +0000 UTC Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.721075 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.721111 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.721134 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:30 crc kubenswrapper[4898]: E0120 03:50:30.721269 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:30 crc kubenswrapper[4898]: E0120 03:50:30.721545 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:30 crc kubenswrapper[4898]: E0120 03:50:30.721603 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.771037 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.771096 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.771117 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.771144 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.771167 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:30Z","lastTransitionTime":"2026-01-20T03:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.874366 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.874457 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.874476 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.874498 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.874514 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:30Z","lastTransitionTime":"2026-01-20T03:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.977987 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.978034 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.978049 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.978076 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:30 crc kubenswrapper[4898]: I0120 03:50:30.978094 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:30Z","lastTransitionTime":"2026-01-20T03:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.081242 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.081301 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.081314 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.081336 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.081347 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:31Z","lastTransitionTime":"2026-01-20T03:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.184411 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.184512 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.184534 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.184561 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.184588 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:31Z","lastTransitionTime":"2026-01-20T03:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.287718 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.287813 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.287843 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.287881 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.287906 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:31Z","lastTransitionTime":"2026-01-20T03:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.392564 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.392633 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.392651 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.392677 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.392699 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:31Z","lastTransitionTime":"2026-01-20T03:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.496564 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.496654 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.496679 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.496714 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.496738 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:31Z","lastTransitionTime":"2026-01-20T03:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.600786 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.601189 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.601364 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.601583 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.601762 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:31Z","lastTransitionTime":"2026-01-20T03:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.702965 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:05:03.369639088 +0000 UTC Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.704574 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.704819 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.705078 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.705283 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.705469 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:31Z","lastTransitionTime":"2026-01-20T03:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.720830 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:31 crc kubenswrapper[4898]: E0120 03:50:31.721181 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.739414 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.809030 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.809121 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.809172 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.809199 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.809216 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:31Z","lastTransitionTime":"2026-01-20T03:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.912565 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.912635 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.912655 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.912682 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:31 crc kubenswrapper[4898]: I0120 03:50:31.912701 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:31Z","lastTransitionTime":"2026-01-20T03:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.015808 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.015877 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.015895 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.015919 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.015937 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:32Z","lastTransitionTime":"2026-01-20T03:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.119484 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.119738 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.119763 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.119791 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.119808 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:32Z","lastTransitionTime":"2026-01-20T03:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.222582 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.222969 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.223128 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.223270 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.223403 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:32Z","lastTransitionTime":"2026-01-20T03:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.327630 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.327689 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.327711 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.327804 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.327825 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:32Z","lastTransitionTime":"2026-01-20T03:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.431414 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.431498 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.431522 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.431550 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.431568 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:32Z","lastTransitionTime":"2026-01-20T03:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.534745 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.534976 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.535079 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.535136 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.535200 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:32Z","lastTransitionTime":"2026-01-20T03:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.639729 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.639788 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.639806 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.639831 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.639849 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:32Z","lastTransitionTime":"2026-01-20T03:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.703122 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:05:46.430028072 +0000 UTC Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.720928 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.720947 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.721054 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:32 crc kubenswrapper[4898]: E0120 03:50:32.721215 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:32 crc kubenswrapper[4898]: E0120 03:50:32.721315 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:32 crc kubenswrapper[4898]: E0120 03:50:32.721537 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.742955 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.742998 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.743015 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.743039 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.743056 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:32Z","lastTransitionTime":"2026-01-20T03:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.846143 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.846208 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.846226 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.846252 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.846270 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:32Z","lastTransitionTime":"2026-01-20T03:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.949378 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.949565 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.949585 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.949611 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:32 crc kubenswrapper[4898]: I0120 03:50:32.949629 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:32Z","lastTransitionTime":"2026-01-20T03:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.054134 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.054226 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.054251 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.054285 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.054315 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:33Z","lastTransitionTime":"2026-01-20T03:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.158943 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.159006 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.159024 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.159051 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.159069 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:33Z","lastTransitionTime":"2026-01-20T03:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.262591 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.262641 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.262653 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.262672 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.262688 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:33Z","lastTransitionTime":"2026-01-20T03:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.365444 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.365489 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.365500 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.365514 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.365525 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:33Z","lastTransitionTime":"2026-01-20T03:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.468344 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.468407 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.468471 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.468503 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.468522 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:33Z","lastTransitionTime":"2026-01-20T03:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.571454 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.571506 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.571517 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.571546 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.571562 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:33Z","lastTransitionTime":"2026-01-20T03:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.674838 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.674917 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.674934 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.674963 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.674980 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:33Z","lastTransitionTime":"2026-01-20T03:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.703715 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 06:26:29.22755218 +0000 UTC Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.721354 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:33 crc kubenswrapper[4898]: E0120 03:50:33.722828 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.744796 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.767609 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.780547 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.780608 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.780625 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.780653 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.780671 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:33Z","lastTransitionTime":"2026-01-20T03:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.789209 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.811136 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.845513 4898 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:09Z\\\",\\\"message\\\":\\\"snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:09.806284 6594 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806379 6594 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806557 6594 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 03:50:09.806631 6594 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.807042 6594 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 03:50:09.807055 6594 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 03:50:09.807079 6594 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 03:50:09.807084 6594 factory.go:656] Stopping watch factory\\\\nI0120 03:50:09.807104 6594 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.864277 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.882250 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.884613 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.884713 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.884734 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.884765 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.884789 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:33Z","lastTransitionTime":"2026-01-20T03:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.901845 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c16dcd-5603-4f30-84d8-d0ad40124ac8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4357a76e77495fe84cc44f53161be1e9a51d2221f1b0b277caf9ace2c2d7d418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6a0b28269257c5d2e0454d99e3303a19bdebebdc779a948bd83ae2496f1d349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a0b28269257c5d2e0454d99e3303a19bdebebdc779a948bd83ae2496f1d349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.928215 4898 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.946268 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.974827 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.988423 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.988517 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.988539 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.988571 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.988595 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:33Z","lastTransitionTime":"2026-01-20T03:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:33 crc kubenswrapper[4898]: I0120 03:50:33.993194 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.013338 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.030826 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:24Z\\\",\\\"message\\\":\\\"2026-01-20T03:49:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e\\\\n2026-01-20T03:49:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e to /host/opt/cni/bin/\\\\n2026-01-20T03:49:39Z [verbose] multus-daemon started\\\\n2026-01-20T03:49:39Z [verbose] Readiness Indicator file check\\\\n2026-01-20T03:50:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.046952 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.074158 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.092917 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.093021 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.093074 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.093093 4898 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.093122 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.093140 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:34Z","lastTransitionTime":"2026-01-20T03:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.112829 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.133698 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.196946 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.197019 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.197041 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.197070 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.197089 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:34Z","lastTransitionTime":"2026-01-20T03:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.300026 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.300104 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.300123 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.300155 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.300178 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:34Z","lastTransitionTime":"2026-01-20T03:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.404074 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.404136 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.404162 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.404190 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.404209 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:34Z","lastTransitionTime":"2026-01-20T03:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.506880 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.506959 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.506985 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.507022 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.507047 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:34Z","lastTransitionTime":"2026-01-20T03:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.610669 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.610763 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.610788 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.610816 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.610841 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:34Z","lastTransitionTime":"2026-01-20T03:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.704077 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:33:45.080954717 +0000 UTC Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.714204 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.714283 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.714303 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.714332 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.714351 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:34Z","lastTransitionTime":"2026-01-20T03:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.720616 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.720706 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.720628 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:34 crc kubenswrapper[4898]: E0120 03:50:34.720785 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:34 crc kubenswrapper[4898]: E0120 03:50:34.721023 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:34 crc kubenswrapper[4898]: E0120 03:50:34.721094 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.818114 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.818198 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.818219 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.818254 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.818277 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:34Z","lastTransitionTime":"2026-01-20T03:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.922271 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.922473 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.922503 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.922539 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:34 crc kubenswrapper[4898]: I0120 03:50:34.922566 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:34Z","lastTransitionTime":"2026-01-20T03:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.026563 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.026635 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.026653 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.026680 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.026698 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:35Z","lastTransitionTime":"2026-01-20T03:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.130413 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.130502 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.130521 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.130555 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.130577 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:35Z","lastTransitionTime":"2026-01-20T03:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.234917 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.235515 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.235671 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.235840 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.235995 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:35Z","lastTransitionTime":"2026-01-20T03:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.340510 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.340581 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.340608 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.340644 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.340670 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:35Z","lastTransitionTime":"2026-01-20T03:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.444458 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.444536 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.444556 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.444587 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.444605 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:35Z","lastTransitionTime":"2026-01-20T03:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.549055 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.549140 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.549158 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.549187 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.549206 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:35Z","lastTransitionTime":"2026-01-20T03:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.653307 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.653379 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.653392 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.653415 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.653450 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:35Z","lastTransitionTime":"2026-01-20T03:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.705204 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:09:35.114914942 +0000 UTC Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.720900 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:35 crc kubenswrapper[4898]: E0120 03:50:35.721125 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.722829 4898 scope.go:117] "RemoveContainer" containerID="0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.756212 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.756279 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.756298 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.756325 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.756347 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:35Z","lastTransitionTime":"2026-01-20T03:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.860228 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.860335 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.860363 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.860394 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.860415 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:35Z","lastTransitionTime":"2026-01-20T03:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.964767 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.964817 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.964827 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.964847 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:35 crc kubenswrapper[4898]: I0120 03:50:35.964860 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:35Z","lastTransitionTime":"2026-01-20T03:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.068320 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.068399 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.068417 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.068480 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.068503 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.172063 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.172472 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.172631 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.172812 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.172983 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.275879 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.276122 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.276257 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.276486 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.276622 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.283118 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/2.log" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.379381 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.379410 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.379422 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.379459 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.379473 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.483672 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.483734 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.483752 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.483776 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.483792 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.587198 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.587312 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.587330 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.587371 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.587395 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.622636 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.622849 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.623004 4898 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.623075 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:40.623055071 +0000 UTC m=+147.222842930 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.623359 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:40.623322099 +0000 UTC m=+147.223109968 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.690707 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.690780 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.690800 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.690829 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.690847 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.705885 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:17:12.141961588 +0000 UTC Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.721274 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.721391 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.721477 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.721288 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.721616 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.721943 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.723917 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.723994 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.724028 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.724171 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.724202 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.724206 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.724223 4898 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.724229 4898 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.724240 4898 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.724300 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:40.724277174 +0000 UTC m=+147.324065044 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.724327 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:40.724317616 +0000 UTC m=+147.324105485 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.724580 4898 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.724713 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:40.724693047 +0000 UTC m=+147.324481116 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.728905 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.728940 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.728954 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.728973 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.728985 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.746779 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.753850 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.753910 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.753929 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.753960 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.753981 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.773045 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.777226 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.777289 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.777309 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.777338 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.777359 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.795460 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.800871 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.800927 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.800954 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.800997 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.801015 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.817896 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.823198 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.823248 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.823262 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.823288 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.823307 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.840954 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:36 crc kubenswrapper[4898]: E0120 03:50:36.841088 4898 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.843938 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.844012 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.844034 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.844064 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.844083 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.947880 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.947963 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.947983 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.948015 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:36 crc kubenswrapper[4898]: I0120 03:50:36.948038 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:36Z","lastTransitionTime":"2026-01-20T03:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.051141 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.051208 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.051227 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.051255 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.051276 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:37Z","lastTransitionTime":"2026-01-20T03:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.155581 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.155661 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.155681 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.155718 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.155743 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:37Z","lastTransitionTime":"2026-01-20T03:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.259422 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.259528 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.259548 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.259577 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.259602 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:37Z","lastTransitionTime":"2026-01-20T03:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.295625 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/2.log" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.299893 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"} Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.300823 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.325218 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae
8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.352372 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.363230 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.363466 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.364026 4898 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.364148 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.364288 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:37Z","lastTransitionTime":"2026-01-20T03:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.374863 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.409287 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:09Z\\\",\\\"message\\\":\\\"snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:09.806284 6594 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806379 6594 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806557 6594 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 03:50:09.806631 6594 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.807042 6594 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 03:50:09.807055 6594 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 03:50:09.807079 6594 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 03:50:09.807084 6594 factory.go:656] Stopping watch factory\\\\nI0120 03:50:09.807104 6594 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.428990 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.452544 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 
03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.467939 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.468376 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.468719 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.468899 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.469131 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:37Z","lastTransitionTime":"2026-01-20T03:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.470618 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c16dcd-5603-4f30-84d8-d0ad40124ac8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4357a76e77495fe84cc44f53161be1e9a51d2221f1b0b277caf9ace2c2d7d418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6a0b28269257c5d2e0454d99e3303a19bdebebdc779a948bd83ae2496f1d349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a0b28269257c5d2e0454d99e3303a19bdebebdc779a948bd83ae2496f1d349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.493674 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.517052 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.538479 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.557397 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.573170 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.573389 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.573557 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.573699 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.573819 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:37Z","lastTransitionTime":"2026-01-20T03:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.589795 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.612187 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.630323 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.656393 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.677708 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.677770 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.677789 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.677818 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.677838 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:37Z","lastTransitionTime":"2026-01-20T03:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.682678 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.707095 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:35:12.393083144 +0000 UTC Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.717081 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.721361 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:37 crc kubenswrapper[4898]: E0120 03:50:37.721622 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.760274 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:24Z\\\",\\\"message\\\":\\\"2026-01-20T03:49:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e\\\\n2026-01-20T03:49:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e to /host/opt/cni/bin/\\\\n2026-01-20T03:49:39Z [verbose] multus-daemon started\\\\n2026-01-20T03:49:39Z [verbose] Readiness Indicator file check\\\\n2026-01-20T03:50:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.781073 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.781131 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.781151 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.781171 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.781186 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:37Z","lastTransitionTime":"2026-01-20T03:50:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.785833 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.885209 4898 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.885273 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.885286 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.885311 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.885327 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:37Z","lastTransitionTime":"2026-01-20T03:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.988539 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.988648 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.988676 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.988898 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:37 crc kubenswrapper[4898]: I0120 03:50:37.988921 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:37Z","lastTransitionTime":"2026-01-20T03:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.092043 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.092113 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.092135 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.092224 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.092245 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:38Z","lastTransitionTime":"2026-01-20T03:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.195884 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.195977 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.195998 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.196029 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.196048 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:38Z","lastTransitionTime":"2026-01-20T03:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.300105 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.300929 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.301022 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.301061 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.301080 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:38Z","lastTransitionTime":"2026-01-20T03:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.307791 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/3.log" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.308854 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/2.log" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.313370 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e" exitCode=1 Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.313474 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"} Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.313551 4898 scope.go:117] "RemoveContainer" containerID="0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.314852 4898 scope.go:117] "RemoveContainer" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e" Jan 20 03:50:38 crc kubenswrapper[4898]: E0120 03:50:38.315135 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.344208 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.368576 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.389230 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.409203 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.409271 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.409292 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.409323 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.409343 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:38Z","lastTransitionTime":"2026-01-20T03:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.422007 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0300bc6165cbd6fcd29e1ffd94456aecd18c01d61916aa9016c4295bdc46bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:09Z\\\",\\\"message\\\":\\\"snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:09.806284 6594 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806379 6594 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.806557 6594 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 03:50:09.806631 6594 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:09.807042 6594 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 03:50:09.807055 6594 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 03:50:09.807079 6594 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 03:50:09.807084 6594 factory.go:656] Stopping watch factory\\\\nI0120 03:50:09.807104 6594 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:37Z\\\",\\\"message\\\":\\\"cyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:37.151566 6981 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:37.152039 6981 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 03:50:37.152113 6981 factory.go:656] Stopping watch factory\\\\nI0120 03:50:37.152128 6981 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 03:50:37.152667 6981 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:50:37.152817 6981 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:50:37.152828 6981 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:50:37.152840 6981 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:37.153354 6981 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.441975 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.463682 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 
03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.485120 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c16dcd-5603-4f30-84d8-d0ad40124ac8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4357a76e77495fe84cc44f53161be1e9a51d2221f1b0b277caf9ace2c2d7d418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6a0b28269257c5d2e0454d99e3303a19bdebebdc779a948bd83ae2496f1d349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a0b28269257c5d2e0454d99e3303a19bdebebdc779a948bd83ae2496f1d349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.505173 4898 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\
\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.512906 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.512968 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.512987 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.513016 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.513041 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:38Z","lastTransitionTime":"2026-01-20T03:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.525590 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.566471 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a5010
3075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.586130 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.613771 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.618131 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.618218 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.618272 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.618299 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.618316 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:38Z","lastTransitionTime":"2026-01-20T03:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.639327 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:24Z\\\",\\\"message\\\":\\\"2026-01-20T03:49:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e\\\\n2026-01-20T03:49:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e to /host/opt/cni/bin/\\\\n2026-01-20T03:49:39Z [verbose] multus-daemon started\\\\n2026-01-20T03:49:39Z [verbose] Readiness Indicator file check\\\\n2026-01-20T03:50:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.659920 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.688650 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.707875 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:47:10.489193156 +0000 UTC Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.711810 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.720368 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.720455 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.720512 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:38 crc kubenswrapper[4898]: E0120 03:50:38.720606 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:38 crc kubenswrapper[4898]: E0120 03:50:38.720788 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:38 crc kubenswrapper[4898]: E0120 03:50:38.720969 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.721983 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.722016 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.722027 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.722043 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.722058 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:38Z","lastTransitionTime":"2026-01-20T03:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.732054 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.753531 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.770702 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.825484 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.825981 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.826112 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.826246 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.826402 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:38Z","lastTransitionTime":"2026-01-20T03:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.930265 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.930346 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.930365 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.930396 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:38 crc kubenswrapper[4898]: I0120 03:50:38.930416 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:38Z","lastTransitionTime":"2026-01-20T03:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.034295 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.035002 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.035183 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.035347 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.035552 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:39Z","lastTransitionTime":"2026-01-20T03:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.139413 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.139528 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.139550 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.139584 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.139604 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:39Z","lastTransitionTime":"2026-01-20T03:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.243830 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.243901 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.243957 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.243985 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.244006 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:39Z","lastTransitionTime":"2026-01-20T03:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.321153 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/3.log" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.327030 4898 scope.go:117] "RemoveContainer" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e" Jan 20 03:50:39 crc kubenswrapper[4898]: E0120 03:50:39.327389 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.346425 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.349341 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.349420 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.349482 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.349516 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.349544 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:39Z","lastTransitionTime":"2026-01-20T03:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.367826 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.385869 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c16dcd-5603-4f30-84d8-d0ad40124ac8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4357a76e77495fe84cc44f53161be1e9a51d2221f1b0b277caf9ace2c2d7d418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6a0b28269257c5d2e0454d99e3303a19bdebebdc77
9a948bd83ae2496f1d349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a0b28269257c5d2e0454d99e3303a19bdebebdc779a948bd83ae2496f1d349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.407713 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.431074 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.453004 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.453095 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.453115 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.453148 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.453171 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:39Z","lastTransitionTime":"2026-01-20T03:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.453426 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.473091 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.508210 4898 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:37Z\\\",\\\"message\\\":\\\"cyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:37.151566 6981 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:37.152039 6981 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 03:50:37.152113 6981 factory.go:656] Stopping watch factory\\\\nI0120 03:50:37.152128 6981 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 03:50:37.152667 6981 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:50:37.152817 6981 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:50:37.152828 6981 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:50:37.152840 6981 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:37.153354 6981 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.543692 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b
9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.556962 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.557020 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:39 crc 
kubenswrapper[4898]: I0120 03:50:39.557037 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.557067 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.557088 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:39Z","lastTransitionTime":"2026-01-20T03:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.566894 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.583936 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.607919 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.631088 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.657229 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.661157 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.661232 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:39 crc 
kubenswrapper[4898]: I0120 03:50:39.661251 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.661281 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.661299 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:39Z","lastTransitionTime":"2026-01-20T03:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.680021 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:24Z\\\",\\\"message\\\":\\\"2026-01-20T03:49:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e\\\\n2026-01-20T03:49:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e to /host/opt/cni/bin/\\\\n2026-01-20T03:49:39Z [verbose] multus-daemon started\\\\n2026-01-20T03:49:39Z [verbose] Readiness Indicator file check\\\\n2026-01-20T03:50:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.698709 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.708966 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:52:40.240663119 +0000 UTC Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.718765 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.721238 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:39 crc kubenswrapper[4898]: E0120 03:50:39.721468 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.740840 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.764069 4898 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.773918 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.773988 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.774010 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.774038 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.774058 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:39Z","lastTransitionTime":"2026-01-20T03:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.877974 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.878075 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.878100 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.878135 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.878160 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:39Z","lastTransitionTime":"2026-01-20T03:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.981596 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.981673 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.981694 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.981721 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:39 crc kubenswrapper[4898]: I0120 03:50:39.981740 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:39Z","lastTransitionTime":"2026-01-20T03:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.085660 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.085741 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.085770 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.085807 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.085836 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:40Z","lastTransitionTime":"2026-01-20T03:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.188589 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.188648 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.188673 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.188706 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.188727 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:40Z","lastTransitionTime":"2026-01-20T03:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.292052 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.292110 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.292134 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.292168 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.292194 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:40Z","lastTransitionTime":"2026-01-20T03:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.395576 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.395635 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.395657 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.395690 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.395715 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:40Z","lastTransitionTime":"2026-01-20T03:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.499303 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.499389 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.499416 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.499500 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.499534 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:40Z","lastTransitionTime":"2026-01-20T03:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.603412 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.603682 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.603769 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.603860 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.603940 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:40Z","lastTransitionTime":"2026-01-20T03:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.707615 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.707974 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.708138 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.708299 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.708508 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:40Z","lastTransitionTime":"2026-01-20T03:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.709814 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:41:21.573940246 +0000 UTC Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.720457 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.720571 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.720928 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:40 crc kubenswrapper[4898]: E0120 03:50:40.721053 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:40 crc kubenswrapper[4898]: E0120 03:50:40.721183 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:40 crc kubenswrapper[4898]: E0120 03:50:40.721279 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.812062 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.812113 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.812130 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.812154 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.812173 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:40Z","lastTransitionTime":"2026-01-20T03:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.916688 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.916764 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.916785 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.916814 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:40 crc kubenswrapper[4898]: I0120 03:50:40.916834 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:40Z","lastTransitionTime":"2026-01-20T03:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.020185 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.020252 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.020270 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.020300 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.020320 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:41Z","lastTransitionTime":"2026-01-20T03:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.124229 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.124301 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.124320 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.124346 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.124365 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:41Z","lastTransitionTime":"2026-01-20T03:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.228354 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.228471 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.228496 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.228533 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.228556 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:41Z","lastTransitionTime":"2026-01-20T03:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.332210 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.332281 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.332300 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.332328 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.332345 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:41Z","lastTransitionTime":"2026-01-20T03:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.436299 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.436372 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.436394 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.436424 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.436482 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:41Z","lastTransitionTime":"2026-01-20T03:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.539368 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.539420 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.539464 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.539490 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.539506 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:41Z","lastTransitionTime":"2026-01-20T03:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.642254 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.642358 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.642381 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.642490 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.642513 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:41Z","lastTransitionTime":"2026-01-20T03:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.710728 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:04:38.369512211 +0000 UTC Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.721162 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:41 crc kubenswrapper[4898]: E0120 03:50:41.721511 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.745653 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.745711 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.745729 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.745754 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.745779 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:41Z","lastTransitionTime":"2026-01-20T03:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.848517 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.848581 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.848600 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.848629 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.848650 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:41Z","lastTransitionTime":"2026-01-20T03:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.951846 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.951907 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.951926 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.951952 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:41 crc kubenswrapper[4898]: I0120 03:50:41.951971 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:41Z","lastTransitionTime":"2026-01-20T03:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.055862 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.055939 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.055959 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.055989 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.056013 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:42Z","lastTransitionTime":"2026-01-20T03:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.159223 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.159367 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.159398 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.159469 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.159494 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:42Z","lastTransitionTime":"2026-01-20T03:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.262020 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.262101 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.262122 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.262151 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.262188 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:42Z","lastTransitionTime":"2026-01-20T03:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.366021 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.366125 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.366145 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.366171 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.366191 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:42Z","lastTransitionTime":"2026-01-20T03:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.468757 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.468827 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.468845 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.468880 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.468901 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:42Z","lastTransitionTime":"2026-01-20T03:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.572075 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.572155 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.572174 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.572207 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.572230 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:42Z","lastTransitionTime":"2026-01-20T03:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.675584 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.675721 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.675739 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.675765 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.675784 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:42Z","lastTransitionTime":"2026-01-20T03:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.711359 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:53:00.336390864 +0000 UTC Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.720837 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:42 crc kubenswrapper[4898]: E0120 03:50:42.721033 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.721423 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.721547 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:42 crc kubenswrapper[4898]: E0120 03:50:42.721634 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:42 crc kubenswrapper[4898]: E0120 03:50:42.721835 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.779647 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.779729 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.779756 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.779791 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.779814 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:42Z","lastTransitionTime":"2026-01-20T03:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.883099 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.883172 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.883188 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.883216 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.883234 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:42Z","lastTransitionTime":"2026-01-20T03:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.987598 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.987663 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.987685 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.987713 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:42 crc kubenswrapper[4898]: I0120 03:50:42.987732 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:42Z","lastTransitionTime":"2026-01-20T03:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.091269 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.091377 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.091395 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.091419 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.091458 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:43Z","lastTransitionTime":"2026-01-20T03:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.195389 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.195495 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.195515 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.195541 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.195558 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:43Z","lastTransitionTime":"2026-01-20T03:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.298500 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.298560 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.298577 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.298604 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.298624 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:43Z","lastTransitionTime":"2026-01-20T03:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.402992 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.403539 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.403750 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.403985 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.404139 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:43Z","lastTransitionTime":"2026-01-20T03:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.508063 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.508159 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.508183 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.508226 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.508254 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:43Z","lastTransitionTime":"2026-01-20T03:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.612546 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.613041 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.613245 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.613519 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.613756 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:43Z","lastTransitionTime":"2026-01-20T03:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.712013 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:07:23.76589982 +0000 UTC Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.717118 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.717322 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.717488 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.717683 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.717835 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:43Z","lastTransitionTime":"2026-01-20T03:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.720651 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:43 crc kubenswrapper[4898]: E0120 03:50:43.722716 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.747399 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4971570e-0291-414d-8c26-d8e99cf4e978\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0120 03:49:31.657955 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 03:49:31.658187 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 03:49:31.660372 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1853337190/tls.crt::/tmp/serving-cert-1853337190/tls.key\\\\\\\"\\\\nI0120 03:49:31.965133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 03:49:31.967083 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 03:49:31.967125 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 03:49:31.967156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 03:49:31.967162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 03:49:31.971865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 03:49:31.971884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 03:49:31.971892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 03:49:31.971895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 03:49:31.971898 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 03:49:31.971900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 03:49:31.971926 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 03:49:31.973929 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.772647 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5afcd8b13ae771bcf605f80540aa692d1ea222c1a0330f6787bb223036428e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.800895 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9485c68-c108-4b2d-9278-97f57ed65716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfe36c6a775803e0691cde5ec713a65e548b6b8a75ab36a4979752d02f18a36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f666b8921133b5319d06768b842c72c322671543d612bf2ddc84db3a2465170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e13395ffb6a45e1d23a10ee247909f76702aafdeab50d9e8f5f3a5d8aace7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://232f703ea88e40c06b47d5ee74c617bea65b7cd7812558e9b43d15901ea79c4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f5f5d34a71966c7b51b41395931ab364adc848ba3a5c402c6f69422ce9a7f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebee99f64026103d41986aed6b52b7607a822dbe76692c73c27840b4b99cab3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://214889e16ae000423925ac28b300557498e56f4150d9ac503ea6fd5499860913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8rsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c9l7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.823311 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.823354 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:43 crc 
kubenswrapper[4898]: I0120 03:50:43.823368 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.823393 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.823407 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:43Z","lastTransitionTime":"2026-01-20T03:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.827814 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-897rl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288aab6-09fa-40a3-8ff8-e00002a32d61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:24Z\\\",\\\"message\\\":\\\"2026-01-20T03:49:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e\\\\n2026-01-20T03:49:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_47c4d0c4-68c2-409e-8f26-b28e070ed65e to /host/opt/cni/bin/\\\\n2026-01-20T03:49:39Z [verbose] multus-daemon started\\\\n2026-01-20T03:49:39Z [verbose] Readiness Indicator file check\\\\n2026-01-20T03:50:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-897rl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.844829 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93f051c-f83c-4d27-a695-dd5a33e979f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttn4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hkf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.864950 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3ed14d-3486-4c46-96e2-39c8b7ec9429\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48279f6337a6e6148f8e03ce097deb0a505b4210d44d5c2fea9dcd7f3164470d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40cee044362690f56ae8c80421c14a175991c92e0ac8ccd6fe9c3b41187a443\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cd99fecfb5dc6906e25d3391f40f4fafddf9f92207d30e02e8fd3a72de1ff31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.883409 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e78a6ce2a30eac5284bdeac26c0f983d88106f436e1765bae0ba94513391e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22618856fab3f3ee0f12cf97a2ddcad92b3cbe660eb72c5ff7a48628ed698b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.904950 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a1574415620d9c1fe87e75a0b804049ac02d22250aa40e17746acdb9a9cefb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.924551 4898 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c16dcd-5603-4f30-84d8-d0ad40124ac8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4357a76e77495fe84cc44f53161be1e9a51d2221f1b0b277caf9ace2c2d7d418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6a0b28269257c5d2e0454d99e3303a19bdebebdc779a948bd83ae2496f1d349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a0b28269257c5d2e0454d99e3303a19bdebebdc779a948bd83ae2496f1d349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.928208 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.928380 
4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.928404 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.928502 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.928579 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:43Z","lastTransitionTime":"2026-01-20T03:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.948379 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d5aca45-66c2-4eeb-a88f-442ff3055110\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ec1e3250d3d8af9557839c01e1380acf11cf709e2e82dc33b3535673a9148d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8a88996c1e04b8663be7236f941fd6ce25e83ee48b58323f076bb20f2346010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd8d65448fbb53fad11c8468fdf9055598f372ce3a28b3cda8b91146d926069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3fb93aba21de62b40349f418251d959ade6a322fe2a0a47cb99543831f8a5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.969785 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:43 crc kubenswrapper[4898]: I0120 03:50:43.987809 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:43Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.008397 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aef68392-4b9d-4a0c-a90e-8f04051fda21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0fc1c39c2620bbeaaefb025e6f713a4512f4aae1374da124e91079eb136bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z65r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cwlf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.032424 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.032564 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.032592 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.032637 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.032667 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:44Z","lastTransitionTime":"2026-01-20T03:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.047971 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91759377-eaa1-4bcf-99f3-bad12cd513c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T03:50:37Z\\\",\\\"message\\\":\\\"cyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:37.151566 6981 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 03:50:37.152039 6981 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 03:50:37.152113 6981 factory.go:656] Stopping watch factory\\\\nI0120 03:50:37.152128 6981 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 03:50:37.152667 6981 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:50:37.152817 6981 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:50:37.152828 6981 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 03:50:37.152840 6981 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 03:50:37.153354 6981 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T03:50:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g868\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzxwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.067847 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cc9r6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f739192d-fd16-4394-b6fc-742d14c876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff589bb4cfc7c6b022efe640ed6fbbfe70f2d25db478ac427e61d567d6da0e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9mbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cc9r6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.087162 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b457daf-f965-454f-b073-093908ec2385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f484c85b2e86a3b0e0b52623d9b62cc8cfc4b36af9a45a61b70656ec3e6d52f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970ed6cd458b652ad00377134fe4ca0b6ebdd2ca0eeb2e2657b5618e2f8d6a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrj8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nz5wd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.110965 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32151fe6-7890-4cfe-97f4-3b20648eb99a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aad8943057e7644eaf7875e564beaabee8b8214df4debe1adb4ba831c0003a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06458eb53c7338059070769b9795bdd578814139b9dc59cc2d1a6e5470616abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd3413012959ce6314c217f9b94fc92cdb1c248ecd9c65b2045128ac70a86bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb41bd68a21bb69d4d619072464386acf2a50103075d04d464345a467e27ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af92cbe10416e0a7aa97c79fee4f0502bb006f35d3b7022ba91e94ef0a0e920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e8e2b1b2b177019fb0e1d4a9c33600f1b640589738bb258e5e2ecc8a7b17845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Complete
d\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc332fc8c96e6e8fd9e3b2c5b3fe358fdd830004329bcb74cea0687fd4b09e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39dbfdd7a06934f9d76e38f094e942bbdec0a05232fc462b15f7a7782c5d9bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T03:49:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T03:49:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.133650 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.137730 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.137809 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.137842 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.137877 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.137905 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:44Z","lastTransitionTime":"2026-01-20T03:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.151818 4898 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8tbv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"352ab345-1f5f-42e3-b57c-63eec90a7fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T03:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08f826fabf407e7afac77594ab7d1627efb77665ff032f4b26981937fa031a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T03:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vclz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T03:49:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8tbv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.240768 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.240853 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.240879 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.240916 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.240939 4898 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:44Z","lastTransitionTime":"2026-01-20T03:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.344115 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.344192 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.344210 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.344243 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.344264 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:44Z","lastTransitionTime":"2026-01-20T03:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.448609 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.449042 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.449064 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.449092 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.449112 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:44Z","lastTransitionTime":"2026-01-20T03:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.553191 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.553260 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.553277 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.553305 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.553326 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:44Z","lastTransitionTime":"2026-01-20T03:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.656676 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.656748 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.656773 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.656801 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.656822 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:44Z","lastTransitionTime":"2026-01-20T03:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.713413 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:58:04.285863277 +0000 UTC Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.720861 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.720917 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.721050 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:44 crc kubenswrapper[4898]: E0120 03:50:44.721262 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:44 crc kubenswrapper[4898]: E0120 03:50:44.721457 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:44 crc kubenswrapper[4898]: E0120 03:50:44.721873 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.760276 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.760335 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.760353 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.760382 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.760404 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:44Z","lastTransitionTime":"2026-01-20T03:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.863992 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.864083 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.864108 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.864139 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.864157 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:44Z","lastTransitionTime":"2026-01-20T03:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.967326 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.967382 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.967399 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.967425 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:44 crc kubenswrapper[4898]: I0120 03:50:44.967469 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:44Z","lastTransitionTime":"2026-01-20T03:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.070057 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.070170 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.070192 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.070226 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.070249 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:45Z","lastTransitionTime":"2026-01-20T03:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.173473 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.173564 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.173585 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.173608 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.173625 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:45Z","lastTransitionTime":"2026-01-20T03:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.276584 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.276674 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.276695 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.276724 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.276745 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:45Z","lastTransitionTime":"2026-01-20T03:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.379840 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.379895 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.379912 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.379933 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.379952 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:45Z","lastTransitionTime":"2026-01-20T03:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.482914 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.482976 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.482988 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.483004 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.483015 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:45Z","lastTransitionTime":"2026-01-20T03:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.585602 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.585662 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.585679 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.585703 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.585720 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:45Z","lastTransitionTime":"2026-01-20T03:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.688380 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.688422 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.688452 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.688469 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.688480 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:45Z","lastTransitionTime":"2026-01-20T03:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.714108 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:12:24.025571448 +0000 UTC Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.720585 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:45 crc kubenswrapper[4898]: E0120 03:50:45.720825 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.791381 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.791502 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.791531 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.791566 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.791604 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:45Z","lastTransitionTime":"2026-01-20T03:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.895981 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.896350 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.896376 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.896466 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:45 crc kubenswrapper[4898]: I0120 03:50:45.896488 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:45Z","lastTransitionTime":"2026-01-20T03:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.000273 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.000356 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.000378 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.000411 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.000476 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.105353 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.105487 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.105511 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.105542 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.105565 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.209532 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.209612 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.209632 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.209666 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.209687 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.312218 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.312281 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.312297 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.312322 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.312342 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.415228 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.415269 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.415283 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.415299 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.415314 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.518604 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.518677 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.518695 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.518727 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.518750 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.621877 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.621952 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.621979 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.622027 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.622055 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.714482 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:03:12.164254149 +0000 UTC Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.720977 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.721025 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.721074 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:46 crc kubenswrapper[4898]: E0120 03:50:46.721199 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:46 crc kubenswrapper[4898]: E0120 03:50:46.721370 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:46 crc kubenswrapper[4898]: E0120 03:50:46.721620 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.725358 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.725417 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.725475 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.725501 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.725519 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.828693 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.828772 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.828793 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.828821 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.828843 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.844860 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.844929 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.844947 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.844971 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.844989 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: E0120 03:50:46.866651 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:46Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.873028 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.873079 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.873099 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.873120 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.873137 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: E0120 03:50:46.892036 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:46Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.897381 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.897497 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.897518 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.897546 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.897568 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: E0120 03:50:46.919137 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:46Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.924920 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.924989 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.925009 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.925041 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.925063 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: E0120 03:50:46.944410 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:46Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.949770 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.949818 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.949835 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.949863 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.949882 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:46 crc kubenswrapper[4898]: E0120 03:50:46.969399 4898 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T03:50:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d10a9157-1f00-4a30-b3ba-08cfa97c4549\\\",\\\"systemUUID\\\":\\\"143a7ca7-6529-4cf4-be5d-89f92f602735\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T03:50:46Z is after 2025-08-24T17:21:41Z" Jan 20 03:50:46 crc kubenswrapper[4898]: E0120 03:50:46.969658 4898 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.972281 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.972332 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.972351 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.972379 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:46 crc kubenswrapper[4898]: I0120 03:50:46.972399 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:46Z","lastTransitionTime":"2026-01-20T03:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.075395 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.075467 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.075477 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.075495 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.075510 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:47Z","lastTransitionTime":"2026-01-20T03:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.178068 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.178117 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.178129 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.178149 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.178164 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:47Z","lastTransitionTime":"2026-01-20T03:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.281073 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.281143 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.281161 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.281190 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.281210 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:47Z","lastTransitionTime":"2026-01-20T03:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.384575 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.384643 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.384661 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.384688 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.384708 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:47Z","lastTransitionTime":"2026-01-20T03:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.486987 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.487050 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.487063 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.487079 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.487117 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:47Z","lastTransitionTime":"2026-01-20T03:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.589510 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.589585 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.589606 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.589631 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.589653 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:47Z","lastTransitionTime":"2026-01-20T03:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.693078 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.693149 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.693170 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.693196 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.693216 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:47Z","lastTransitionTime":"2026-01-20T03:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.715567 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:51:07.574849879 +0000 UTC Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.721074 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:47 crc kubenswrapper[4898]: E0120 03:50:47.721268 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.796459 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.796630 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.796659 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.796688 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.796706 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:47Z","lastTransitionTime":"2026-01-20T03:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.899065 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.899130 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.899148 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.899178 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:47 crc kubenswrapper[4898]: I0120 03:50:47.899200 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:47Z","lastTransitionTime":"2026-01-20T03:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.002506 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.002570 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.002588 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.002614 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.002631 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:48Z","lastTransitionTime":"2026-01-20T03:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.105232 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.105298 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.105317 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.105344 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.105364 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:48Z","lastTransitionTime":"2026-01-20T03:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.208895 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.208971 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.208993 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.209022 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.209041 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:48Z","lastTransitionTime":"2026-01-20T03:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.312336 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.312398 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.312418 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.312472 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.312497 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:48Z","lastTransitionTime":"2026-01-20T03:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.416102 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.416179 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.416202 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.416235 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.416257 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:48Z","lastTransitionTime":"2026-01-20T03:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.519618 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.519676 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.519693 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.519712 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.519723 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:48Z","lastTransitionTime":"2026-01-20T03:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.623193 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.623263 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.623278 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.623305 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.623322 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:48Z","lastTransitionTime":"2026-01-20T03:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.716645 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:48:27.116314376 +0000 UTC Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.720944 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:48 crc kubenswrapper[4898]: E0120 03:50:48.721172 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.721265 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.721292 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:48 crc kubenswrapper[4898]: E0120 03:50:48.721801 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:48 crc kubenswrapper[4898]: E0120 03:50:48.721987 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.728634 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.728718 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.728767 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.728796 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.728815 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:48Z","lastTransitionTime":"2026-01-20T03:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.831811 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.831888 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.831906 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.831931 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.831949 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:48Z","lastTransitionTime":"2026-01-20T03:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.935641 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.935710 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.935728 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.935756 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:48 crc kubenswrapper[4898]: I0120 03:50:48.935774 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:48Z","lastTransitionTime":"2026-01-20T03:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.039973 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.040043 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.040063 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.040092 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.040117 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:49Z","lastTransitionTime":"2026-01-20T03:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.143288 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.143352 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.143372 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.143400 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.143426 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:49Z","lastTransitionTime":"2026-01-20T03:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.247064 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.247140 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.247165 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.247200 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.247228 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:49Z","lastTransitionTime":"2026-01-20T03:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.350845 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.350951 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.350970 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.351002 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.351020 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:49Z","lastTransitionTime":"2026-01-20T03:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.454602 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.454673 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.454694 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.454723 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.454742 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:49Z","lastTransitionTime":"2026-01-20T03:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.559385 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.559508 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.559532 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.559564 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.559586 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:49Z","lastTransitionTime":"2026-01-20T03:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.662926 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.663008 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.663027 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.663060 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.663084 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:49Z","lastTransitionTime":"2026-01-20T03:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.717083 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:20:37.098981688 +0000 UTC Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.720667 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:49 crc kubenswrapper[4898]: E0120 03:50:49.721275 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.766558 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.766642 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.766656 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.766677 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.766714 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:49Z","lastTransitionTime":"2026-01-20T03:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.869979 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.870029 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.870041 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.870062 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.870076 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:49Z","lastTransitionTime":"2026-01-20T03:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.973492 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.973556 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.973575 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.973600 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:49 crc kubenswrapper[4898]: I0120 03:50:49.973619 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:49Z","lastTransitionTime":"2026-01-20T03:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.076869 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.076945 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.076963 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.076993 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.077012 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:50Z","lastTransitionTime":"2026-01-20T03:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.181008 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.181095 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.181113 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.181139 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.181159 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:50Z","lastTransitionTime":"2026-01-20T03:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.284320 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.284382 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.284400 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.284423 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.284476 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:50Z","lastTransitionTime":"2026-01-20T03:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.387778 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.387866 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.387885 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.387920 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.387950 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:50Z","lastTransitionTime":"2026-01-20T03:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.490987 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.491069 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.491089 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.491123 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.491144 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:50Z","lastTransitionTime":"2026-01-20T03:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.593926 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.593993 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.594011 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.594036 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.594055 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:50Z","lastTransitionTime":"2026-01-20T03:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.696996 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.697051 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.697060 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.697076 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.697085 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:50Z","lastTransitionTime":"2026-01-20T03:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.718251 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 02:14:11.314699891 +0000 UTC Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.720713 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:50 crc kubenswrapper[4898]: E0120 03:50:50.720828 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.720720 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.720720 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:50 crc kubenswrapper[4898]: E0120 03:50:50.721046 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:50 crc kubenswrapper[4898]: E0120 03:50:50.721217 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.800515 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.800620 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.800639 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.800665 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.800683 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:50Z","lastTransitionTime":"2026-01-20T03:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.904873 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.904949 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.904970 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.904997 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:50 crc kubenswrapper[4898]: I0120 03:50:50.905020 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:50Z","lastTransitionTime":"2026-01-20T03:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.008045 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.008115 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.008135 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.008163 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.008181 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:51Z","lastTransitionTime":"2026-01-20T03:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.111129 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.111173 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.111189 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.111214 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.111231 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:51Z","lastTransitionTime":"2026-01-20T03:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.221872 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.221991 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.222013 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.222043 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.222070 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:51Z","lastTransitionTime":"2026-01-20T03:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.327060 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.327129 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.327147 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.327176 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.327199 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:51Z","lastTransitionTime":"2026-01-20T03:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.435288 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.435361 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.435384 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.435413 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.435468 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:51Z","lastTransitionTime":"2026-01-20T03:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.539197 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.539261 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.539279 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.539303 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.539322 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:51Z","lastTransitionTime":"2026-01-20T03:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.642506 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.642568 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.642580 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.642599 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.642612 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:51Z","lastTransitionTime":"2026-01-20T03:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.718503 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:49:55.287303652 +0000 UTC Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.720856 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:51 crc kubenswrapper[4898]: E0120 03:50:51.721045 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.745939 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.746161 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.746305 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.746489 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.746632 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:51Z","lastTransitionTime":"2026-01-20T03:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.850028 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.850080 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.850093 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.850110 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.850124 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:51Z","lastTransitionTime":"2026-01-20T03:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.953401 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.953556 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.953581 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.953611 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:51 crc kubenswrapper[4898]: I0120 03:50:51.953632 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:51Z","lastTransitionTime":"2026-01-20T03:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.056722 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.056759 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.056768 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.056782 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.056794 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:52Z","lastTransitionTime":"2026-01-20T03:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.159179 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.159221 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.159232 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.159248 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.159261 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:52Z","lastTransitionTime":"2026-01-20T03:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.261901 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.261942 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.261953 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.261968 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.261981 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:52Z","lastTransitionTime":"2026-01-20T03:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.364768 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.365263 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.365775 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.365864 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.365894 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:52Z","lastTransitionTime":"2026-01-20T03:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.469531 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.469596 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.469614 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.469639 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.469656 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:52Z","lastTransitionTime":"2026-01-20T03:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.572806 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.572882 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.572900 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.572923 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.572944 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:52Z","lastTransitionTime":"2026-01-20T03:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.675911 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.676065 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.676087 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.676119 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.676137 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:52Z","lastTransitionTime":"2026-01-20T03:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.719181 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:47:50.156613823 +0000 UTC
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.720467 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.720501 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 03:50:52 crc kubenswrapper[4898]: E0120 03:50:52.720636 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
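[Editor's note] The certificate_manager records above quote the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on each pass. That is consistent with client-go's certificate manager re-drawing a jittered rotation deadline (roughly 80% +/- 10% of the certificate's validity window) every time it evaluates. The Python sketch below illustrates that scheme only; the issuance time and exact jitter bounds are assumptions, not values from this log.

    # Illustrative only, not kubelet source: re-derive a jittered rotation
    # deadline from a fixed validity window, as client-go's certificate
    # manager is documented to do (~70-90% of the window).
    import random
    from datetime import datetime, timedelta

    def next_rotation_deadline(not_before, not_after):
        window = (not_after - not_before).total_seconds()
        fraction = 0.7 + 0.2 * random.random()  # assumed jitter band
        return not_before + timedelta(seconds=window * fraction)

    expires = datetime(2026, 2, 24, 5, 53, 3)
    issued = expires - timedelta(days=365)  # hypothetical issuance time
    for _ in range(3):
        print(next_rotation_deadline(issued, expires))  # differs per call

Under that assumption, a deadline already in the past (as in the lines above) simply means rotation is due and the manager will attempt it on its next cycle.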
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.720686 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:52 crc kubenswrapper[4898]: E0120 03:50:52.720759 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:52 crc kubenswrapper[4898]: E0120 03:50:52.720890 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.779570 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.779649 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.779670 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.779698 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.779718 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:52Z","lastTransitionTime":"2026-01-20T03:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.883176 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.883233 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.883253 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.883338 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.883410 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:52Z","lastTransitionTime":"2026-01-20T03:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.986777 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.986836 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.986855 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.986882 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:52 crc kubenswrapper[4898]: I0120 03:50:52.986898 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:52Z","lastTransitionTime":"2026-01-20T03:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.090417 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.090538 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.090564 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.090594 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.090614 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:53Z","lastTransitionTime":"2026-01-20T03:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.193894 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.193963 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.193979 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.194006 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.194030 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:53Z","lastTransitionTime":"2026-01-20T03:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.297873 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.297950 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.297968 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.297994 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.298019 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:53Z","lastTransitionTime":"2026-01-20T03:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.402420 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.402540 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.402565 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.402599 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.402627 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:53Z","lastTransitionTime":"2026-01-20T03:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.506195 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.506264 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.506285 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.506313 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.506332 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:53Z","lastTransitionTime":"2026-01-20T03:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.609042 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.609100 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.609120 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.609146 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.609163 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:53Z","lastTransitionTime":"2026-01-20T03:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.713008 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.713056 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.713068 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.713085 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.713097 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:53Z","lastTransitionTime":"2026-01-20T03:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.720303 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:57:59.984886854 +0000 UTC
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.720420 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9"
Jan 20 03:50:53 crc kubenswrapper[4898]: E0120 03:50:53.720681 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.721844 4898 scope.go:117] "RemoveContainer" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"
Jan 20 03:50:53 crc kubenswrapper[4898]: E0120 03:50:53.722196 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.765040 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.765004284 podStartE2EDuration="22.765004284s" podCreationTimestamp="2026-01-20 03:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:53.74125752 +0000 UTC m=+100.341045389" watchObservedRunningTime="2026-01-20 03:50:53.765004284 +0000 UTC m=+100.364792183"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.792315 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.792285484 podStartE2EDuration="46.792285484s" podCreationTimestamp="2026-01-20 03:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:53.765485048 +0000 UTC m=+100.365272937" watchObservedRunningTime="2026-01-20 03:50:53.792285484 +0000 UTC m=+100.392073383"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.816599 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.816665 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.816685 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.816707 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.816731 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:53Z","lastTransitionTime":"2026-01-20T03:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:53Z","lastTransitionTime":"2026-01-20T03:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.874271 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podStartSLOduration=76.874219261 podStartE2EDuration="1m16.874219261s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:53.836427649 +0000 UTC m=+100.436215548" watchObservedRunningTime="2026-01-20 03:50:53.874219261 +0000 UTC m=+100.474007170" Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.889262 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-cc9r6" podStartSLOduration=76.889222118 podStartE2EDuration="1m16.889222118s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:53.888202417 +0000 UTC m=+100.487990286" watchObservedRunningTime="2026-01-20 03:50:53.889222118 +0000 UTC m=+100.489009997" Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.906385 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nz5wd" podStartSLOduration=76.906367039 podStartE2EDuration="1m16.906367039s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:53.904873774 +0000 UTC m=+100.504661673" watchObservedRunningTime="2026-01-20 03:50:53.906367039 +0000 UTC m=+100.506154908" Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.920131 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.920245 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.920330 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.920359 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.920406 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:53Z","lastTransitionTime":"2026-01-20T03:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.941519 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=82.941425368 podStartE2EDuration="1m22.941425368s" podCreationTimestamp="2026-01-20 03:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:53.940608853 +0000 UTC m=+100.540396752" watchObservedRunningTime="2026-01-20 03:50:53.941425368 +0000 UTC m=+100.541213277"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.970671 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8tbv5" podStartSLOduration=76.970650609 podStartE2EDuration="1m16.970650609s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:53.970611727 +0000 UTC m=+100.570399676" watchObservedRunningTime="2026-01-20 03:50:53.970650609 +0000 UTC m=+100.570438478"
Jan 20 03:50:53 crc kubenswrapper[4898]: I0120 03:50:53.995602 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.995580668 podStartE2EDuration="1m21.995580668s" podCreationTimestamp="2026-01-20 03:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:53.994897577 +0000 UTC m=+100.594685476" watchObservedRunningTime="2026-01-20 03:50:53.995580668 +0000 UTC m=+100.595368537"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.024064 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.024193 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.024213 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.024295 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.024315 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:54Z","lastTransitionTime":"2026-01-20T03:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.046512 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-c9l7w" podStartSLOduration=77.046489628 podStartE2EDuration="1m17.046489628s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:54.044212259 +0000 UTC m=+100.644000158" watchObservedRunningTime="2026-01-20 03:50:54.046489628 +0000 UTC m=+100.646277527"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.065388 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-897rl" podStartSLOduration=77.065328432 podStartE2EDuration="1m17.065328432s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:54.062746454 +0000 UTC m=+100.662534353" watchObservedRunningTime="2026-01-20 03:50:54.065328432 +0000 UTC m=+100.665116331"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.098509 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.09841673 podStartE2EDuration="1m19.09841673s" podCreationTimestamp="2026-01-20 03:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:54.097228544 +0000 UTC m=+100.697016463" watchObservedRunningTime="2026-01-20 03:50:54.09841673 +0000 UTC m=+100.698204629"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.127968 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.128029 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.128041 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.128064 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.128080 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:54Z","lastTransitionTime":"2026-01-20T03:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.231567 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.231634 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.231655 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.231681 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.231702 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:54Z","lastTransitionTime":"2026-01-20T03:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.335567 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.335634 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.335652 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.335677 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.335696 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:54Z","lastTransitionTime":"2026-01-20T03:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.438501 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.438568 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.438584 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.438629 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.438646 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:54Z","lastTransitionTime":"2026-01-20T03:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.541987 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.542047 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.542068 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.542092 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.542109 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:54Z","lastTransitionTime":"2026-01-20T03:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.644450 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.644493 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.644501 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.644516 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.644527 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:54Z","lastTransitionTime":"2026-01-20T03:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.720784 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:35:29.254973581 +0000 UTC
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.721044 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.721097 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.721054 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 03:50:54 crc kubenswrapper[4898]: E0120 03:50:54.721275 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 03:50:54 crc kubenswrapper[4898]: E0120 03:50:54.721397 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 03:50:54 crc kubenswrapper[4898]: E0120 03:50:54.721525 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.747266 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.747318 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.747340 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.747376 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.747404 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:54Z","lastTransitionTime":"2026-01-20T03:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.850218 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.850299 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.850318 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.850343 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.850364 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:54Z","lastTransitionTime":"2026-01-20T03:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.954023 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.954093 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.954113 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.954142 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:54 crc kubenswrapper[4898]: I0120 03:50:54.954163 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:54Z","lastTransitionTime":"2026-01-20T03:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.057021 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.057109 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.057132 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.057162 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.057189 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:55Z","lastTransitionTime":"2026-01-20T03:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.160139 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.160213 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.160231 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.160258 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.160276 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:55Z","lastTransitionTime":"2026-01-20T03:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.292184 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.292260 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.292283 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.292314 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.292336 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:55Z","lastTransitionTime":"2026-01-20T03:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.395373 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.395458 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.395477 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.395500 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.395517 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:55Z","lastTransitionTime":"2026-01-20T03:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
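[Editor's note] The NotReady heartbeat repeats roughly every 100 ms throughout this window. With each journal record on its own line (as above), the condition payload is plain JSON and easy to summarize. A minimal stdlib sketch follows; the filename kubelet.log is hypothetical and stands for wherever this journal text is saved.

    # Minimal sketch: extract the JSON condition from each "Node became not
    # ready" record and summarize the observed NotReady window.
    # Assumes the journal text is saved as kubelet.log (hypothetical name).
    import json
    import re

    pattern = re.compile(r'"Node became not ready" node="crc" condition=(\{.*?\})')

    conditions = []
    with open("kubelet.log") as log:
        for line in log:
            match = pattern.search(line)
            if match:
                conditions.append(json.loads(match.group(1)))

    if conditions:
        print(len(conditions), "NotReady heartbeats,",
              conditions[0]["lastHeartbeatTime"], "to",
              conditions[-1]["lastHeartbeatTime"],
              "reason:", conditions[-1]["reason"])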
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.498826 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.498879 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.498896 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.498917 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.498934 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:55Z","lastTransitionTime":"2026-01-20T03:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.602370 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.602425 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.602483 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.602507 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.602525 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:55Z","lastTransitionTime":"2026-01-20T03:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.705667 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.705740 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.705765 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.705796 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.705820 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:55Z","lastTransitionTime":"2026-01-20T03:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.720654 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.721033 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:04:16.393569139 +0000 UTC
Jan 20 03:50:55 crc kubenswrapper[4898]: E0120 03:50:55.721081 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.809362 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.809425 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.809489 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.809521 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.809540 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:55Z","lastTransitionTime":"2026-01-20T03:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.912540 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.912619 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.912641 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.912663 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 03:50:55 crc kubenswrapper[4898]: I0120 03:50:55.912680 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:55Z","lastTransitionTime":"2026-01-20T03:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.015998 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.016047 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.016065 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.016087 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.016107 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:56Z","lastTransitionTime":"2026-01-20T03:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.100212 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:56 crc kubenswrapper[4898]: E0120 03:50:56.100388 4898 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:50:56 crc kubenswrapper[4898]: E0120 03:50:56.100506 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs podName:e93f051c-f83c-4d27-a695-dd5a33e979f4 nodeName:}" failed. No retries permitted until 2026-01-20 03:52:00.100482879 +0000 UTC m=+166.700270768 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs") pod "network-metrics-daemon-5hkf9" (UID: "e93f051c-f83c-4d27-a695-dd5a33e979f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.119617 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.119664 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.119686 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.119721 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.119743 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:56Z","lastTransitionTime":"2026-01-20T03:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.221601 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.221623 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.221631 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.221644 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.221652 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:56Z","lastTransitionTime":"2026-01-20T03:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.323805 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.323840 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.323853 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.323872 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.323884 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:56Z","lastTransitionTime":"2026-01-20T03:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.429336 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.429377 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.429390 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.429407 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.429420 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:56Z","lastTransitionTime":"2026-01-20T03:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.531593 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.531624 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.531634 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.531649 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.531658 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:56Z","lastTransitionTime":"2026-01-20T03:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.634146 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.634203 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.634222 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.634245 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.634262 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:56Z","lastTransitionTime":"2026-01-20T03:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.720931 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.721014 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:56 crc kubenswrapper[4898]: E0120 03:50:56.721056 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.720932 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.721138 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:33:08.055695489 +0000 UTC Jan 20 03:50:56 crc kubenswrapper[4898]: E0120 03:50:56.721197 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:56 crc kubenswrapper[4898]: E0120 03:50:56.721325 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.737124 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.737176 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.737193 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.737215 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.737232 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:56Z","lastTransitionTime":"2026-01-20T03:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.839992 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.840061 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.840084 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.840114 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.840136 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:56Z","lastTransitionTime":"2026-01-20T03:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.942269 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.942299 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.942308 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.942321 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:56 crc kubenswrapper[4898]: I0120 03:50:56.942330 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:56Z","lastTransitionTime":"2026-01-20T03:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.044904 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.044948 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.044962 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.044980 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.044991 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:57Z","lastTransitionTime":"2026-01-20T03:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.173852 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.173899 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.173910 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.173929 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.173940 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:57Z","lastTransitionTime":"2026-01-20T03:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.276363 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.276467 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.276494 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.276524 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.276549 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:57Z","lastTransitionTime":"2026-01-20T03:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.356785 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.356896 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.356955 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.356981 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.357002 4898 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:50:57Z","lastTransitionTime":"2026-01-20T03:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.419750 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w"] Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.420340 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.422505 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.423468 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.423947 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.424725 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.513652 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6712c184-bfae-445c-8511-2568248b4b87-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.513724 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6712c184-bfae-445c-8511-2568248b4b87-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.513792 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6712c184-bfae-445c-8511-2568248b4b87-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.513838 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6712c184-bfae-445c-8511-2568248b4b87-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.513878 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6712c184-bfae-445c-8511-2568248b4b87-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.615023 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6712c184-bfae-445c-8511-2568248b4b87-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc 
kubenswrapper[4898]: I0120 03:50:57.615158 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6712c184-bfae-445c-8511-2568248b4b87-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.615227 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6712c184-bfae-445c-8511-2568248b4b87-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.615305 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6712c184-bfae-445c-8511-2568248b4b87-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.615376 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6712c184-bfae-445c-8511-2568248b4b87-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.615541 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6712c184-bfae-445c-8511-2568248b4b87-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.615404 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6712c184-bfae-445c-8511-2568248b4b87-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.617293 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6712c184-bfae-445c-8511-2568248b4b87-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.626631 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6712c184-bfae-445c-8511-2568248b4b87-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.634611 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6712c184-bfae-445c-8511-2568248b4b87-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p4m8w\" (UID: \"6712c184-bfae-445c-8511-2568248b4b87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.720645 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:57 crc kubenswrapper[4898]: E0120 03:50:57.720865 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.721772 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:22:05.717199756 +0000 UTC Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.721843 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.735512 4898 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 03:50:57 crc kubenswrapper[4898]: I0120 03:50:57.745598 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" Jan 20 03:50:58 crc kubenswrapper[4898]: I0120 03:50:58.415496 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" event={"ID":"6712c184-bfae-445c-8511-2568248b4b87","Type":"ContainerStarted","Data":"12847436e3cc6d962e2f7de786b9949d8de6b43fbf20ccabb92e32805fad8c53"} Jan 20 03:50:58 crc kubenswrapper[4898]: I0120 03:50:58.416113 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" event={"ID":"6712c184-bfae-445c-8511-2568248b4b87","Type":"ContainerStarted","Data":"97527417404e8b85c01534ba4e5968272265a655de44c126de9c2f7cd2207641"} Jan 20 03:50:58 crc kubenswrapper[4898]: I0120 03:50:58.444710 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p4m8w" podStartSLOduration=81.444676378 podStartE2EDuration="1m21.444676378s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:50:58.44112946 +0000 UTC m=+105.040917339" watchObservedRunningTime="2026-01-20 03:50:58.444676378 +0000 UTC m=+105.044464267" Jan 20 03:50:58 crc kubenswrapper[4898]: I0120 03:50:58.720786 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:50:58 crc kubenswrapper[4898]: I0120 03:50:58.720798 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:50:58 crc kubenswrapper[4898]: I0120 03:50:58.720944 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:50:58 crc kubenswrapper[4898]: E0120 03:50:58.721259 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:50:58 crc kubenswrapper[4898]: E0120 03:50:58.721590 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:50:58 crc kubenswrapper[4898]: E0120 03:50:58.721827 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:50:59 crc kubenswrapper[4898]: I0120 03:50:59.721414 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:50:59 crc kubenswrapper[4898]: E0120 03:50:59.721800 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:00 crc kubenswrapper[4898]: I0120 03:51:00.720245 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:00 crc kubenswrapper[4898]: I0120 03:51:00.720245 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:00 crc kubenswrapper[4898]: I0120 03:51:00.720247 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:00 crc kubenswrapper[4898]: E0120 03:51:00.720408 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:00 crc kubenswrapper[4898]: E0120 03:51:00.720729 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:00 crc kubenswrapper[4898]: E0120 03:51:00.720780 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:01 crc kubenswrapper[4898]: I0120 03:51:01.720626 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:01 crc kubenswrapper[4898]: E0120 03:51:01.720796 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:02 crc kubenswrapper[4898]: I0120 03:51:02.720766 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:02 crc kubenswrapper[4898]: I0120 03:51:02.720840 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:02 crc kubenswrapper[4898]: E0120 03:51:02.720900 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:02 crc kubenswrapper[4898]: I0120 03:51:02.720853 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:02 crc kubenswrapper[4898]: E0120 03:51:02.721019 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:02 crc kubenswrapper[4898]: E0120 03:51:02.721225 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:03 crc kubenswrapper[4898]: I0120 03:51:03.720750 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:03 crc kubenswrapper[4898]: E0120 03:51:03.722009 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:04 crc kubenswrapper[4898]: I0120 03:51:04.720277 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:04 crc kubenswrapper[4898]: I0120 03:51:04.720369 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:04 crc kubenswrapper[4898]: I0120 03:51:04.720446 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:04 crc kubenswrapper[4898]: E0120 03:51:04.720759 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:04 crc kubenswrapper[4898]: E0120 03:51:04.720846 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:04 crc kubenswrapper[4898]: E0120 03:51:04.720932 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:05 crc kubenswrapper[4898]: I0120 03:51:05.720996 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:05 crc kubenswrapper[4898]: E0120 03:51:05.721198 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:05 crc kubenswrapper[4898]: I0120 03:51:05.722361 4898 scope.go:117] "RemoveContainer" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e" Jan 20 03:51:05 crc kubenswrapper[4898]: E0120 03:51:05.722660 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" Jan 20 03:51:06 crc kubenswrapper[4898]: I0120 03:51:06.720889 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:06 crc kubenswrapper[4898]: I0120 03:51:06.720925 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:06 crc kubenswrapper[4898]: I0120 03:51:06.720940 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:06 crc kubenswrapper[4898]: E0120 03:51:06.721125 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:06 crc kubenswrapper[4898]: E0120 03:51:06.721242 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:06 crc kubenswrapper[4898]: E0120 03:51:06.721374 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:07 crc kubenswrapper[4898]: I0120 03:51:07.720479 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:07 crc kubenswrapper[4898]: E0120 03:51:07.720744 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:08 crc kubenswrapper[4898]: I0120 03:51:08.720984 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:08 crc kubenswrapper[4898]: I0120 03:51:08.720987 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:08 crc kubenswrapper[4898]: E0120 03:51:08.721144 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:08 crc kubenswrapper[4898]: E0120 03:51:08.721237 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:08 crc kubenswrapper[4898]: I0120 03:51:08.721130 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:08 crc kubenswrapper[4898]: E0120 03:51:08.721359 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:09 crc kubenswrapper[4898]: I0120 03:51:09.721216 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:09 crc kubenswrapper[4898]: E0120 03:51:09.721719 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:10 crc kubenswrapper[4898]: I0120 03:51:10.455588 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-897rl_1288aab6-09fa-40a3-8ff8-e00002a32d61/kube-multus/1.log" Jan 20 03:51:10 crc kubenswrapper[4898]: I0120 03:51:10.456674 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-897rl_1288aab6-09fa-40a3-8ff8-e00002a32d61/kube-multus/0.log" Jan 20 03:51:10 crc kubenswrapper[4898]: I0120 03:51:10.456758 4898 generic.go:334] "Generic (PLEG): container finished" podID="1288aab6-09fa-40a3-8ff8-e00002a32d61" containerID="a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b" exitCode=1 Jan 20 03:51:10 crc kubenswrapper[4898]: I0120 03:51:10.456807 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-897rl" event={"ID":"1288aab6-09fa-40a3-8ff8-e00002a32d61","Type":"ContainerDied","Data":"a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b"} Jan 20 03:51:10 crc kubenswrapper[4898]: I0120 03:51:10.456864 4898 scope.go:117] "RemoveContainer" containerID="61be8a3aa1d2284dec0e7a9dc34de9e4124fafa1417c57a8ef6bbfa7b2c976d7" Jan 20 03:51:10 crc kubenswrapper[4898]: I0120 03:51:10.459104 4898 scope.go:117] "RemoveContainer" containerID="a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b" Jan 20 03:51:10 crc kubenswrapper[4898]: E0120 03:51:10.459563 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-897rl_openshift-multus(1288aab6-09fa-40a3-8ff8-e00002a32d61)\"" pod="openshift-multus/multus-897rl" podUID="1288aab6-09fa-40a3-8ff8-e00002a32d61" Jan 20 03:51:10 crc kubenswrapper[4898]: I0120 03:51:10.721156 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:10 crc kubenswrapper[4898]: E0120 03:51:10.721311 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:10 crc kubenswrapper[4898]: I0120 03:51:10.721380 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:10 crc kubenswrapper[4898]: E0120 03:51:10.721582 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:10 crc kubenswrapper[4898]: I0120 03:51:10.722661 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:10 crc kubenswrapper[4898]: E0120 03:51:10.722814 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:11 crc kubenswrapper[4898]: I0120 03:51:11.462840 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-897rl_1288aab6-09fa-40a3-8ff8-e00002a32d61/kube-multus/1.log" Jan 20 03:51:11 crc kubenswrapper[4898]: I0120 03:51:11.720842 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:11 crc kubenswrapper[4898]: E0120 03:51:11.721053 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:12 crc kubenswrapper[4898]: I0120 03:51:12.720663 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:12 crc kubenswrapper[4898]: I0120 03:51:12.720761 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:12 crc kubenswrapper[4898]: I0120 03:51:12.720681 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:12 crc kubenswrapper[4898]: E0120 03:51:12.720857 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:12 crc kubenswrapper[4898]: E0120 03:51:12.721016 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:12 crc kubenswrapper[4898]: E0120 03:51:12.721194 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:13 crc kubenswrapper[4898]: E0120 03:51:13.705548 4898 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 20 03:51:13 crc kubenswrapper[4898]: I0120 03:51:13.721036 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:13 crc kubenswrapper[4898]: E0120 03:51:13.723643 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:13 crc kubenswrapper[4898]: E0120 03:51:13.838240 4898 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 03:51:14 crc kubenswrapper[4898]: I0120 03:51:14.720892 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:14 crc kubenswrapper[4898]: I0120 03:51:14.720951 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:14 crc kubenswrapper[4898]: I0120 03:51:14.720968 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:14 crc kubenswrapper[4898]: E0120 03:51:14.721455 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:14 crc kubenswrapper[4898]: E0120 03:51:14.721320 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:14 crc kubenswrapper[4898]: E0120 03:51:14.721585 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:15 crc kubenswrapper[4898]: I0120 03:51:15.721195 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:15 crc kubenswrapper[4898]: E0120 03:51:15.721392 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:16 crc kubenswrapper[4898]: I0120 03:51:16.720513 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:16 crc kubenswrapper[4898]: I0120 03:51:16.721049 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:16 crc kubenswrapper[4898]: I0120 03:51:16.721144 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:16 crc kubenswrapper[4898]: E0120 03:51:16.721390 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:16 crc kubenswrapper[4898]: E0120 03:51:16.721505 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:16 crc kubenswrapper[4898]: E0120 03:51:16.721596 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:16 crc kubenswrapper[4898]: I0120 03:51:16.721805 4898 scope.go:117] "RemoveContainer" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e" Jan 20 03:51:16 crc kubenswrapper[4898]: E0120 03:51:16.722041 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hzxwz_openshift-ovn-kubernetes(91759377-eaa1-4bcf-99f3-bad12cd513c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" Jan 20 03:51:17 crc kubenswrapper[4898]: I0120 03:51:17.721352 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:17 crc kubenswrapper[4898]: E0120 03:51:17.721617 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:18 crc kubenswrapper[4898]: I0120 03:51:18.720651 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:18 crc kubenswrapper[4898]: I0120 03:51:18.720739 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:18 crc kubenswrapper[4898]: I0120 03:51:18.720657 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:18 crc kubenswrapper[4898]: E0120 03:51:18.720869 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:18 crc kubenswrapper[4898]: E0120 03:51:18.721014 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:18 crc kubenswrapper[4898]: E0120 03:51:18.721179 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:18 crc kubenswrapper[4898]: E0120 03:51:18.840014 4898 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 03:51:19 crc kubenswrapper[4898]: I0120 03:51:19.720406 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:19 crc kubenswrapper[4898]: E0120 03:51:19.720638 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:20 crc kubenswrapper[4898]: I0120 03:51:20.721120 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:20 crc kubenswrapper[4898]: I0120 03:51:20.721205 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:20 crc kubenswrapper[4898]: E0120 03:51:20.721325 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:20 crc kubenswrapper[4898]: I0120 03:51:20.721351 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:20 crc kubenswrapper[4898]: E0120 03:51:20.721553 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:20 crc kubenswrapper[4898]: E0120 03:51:20.721720 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:21 crc kubenswrapper[4898]: I0120 03:51:21.720992 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:21 crc kubenswrapper[4898]: E0120 03:51:21.721342 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:22 crc kubenswrapper[4898]: I0120 03:51:22.720604 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:22 crc kubenswrapper[4898]: I0120 03:51:22.720643 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:22 crc kubenswrapper[4898]: E0120 03:51:22.720835 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:22 crc kubenswrapper[4898]: I0120 03:51:22.720922 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:22 crc kubenswrapper[4898]: E0120 03:51:22.721075 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:22 crc kubenswrapper[4898]: E0120 03:51:22.721383 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:22 crc kubenswrapper[4898]: I0120 03:51:22.721615 4898 scope.go:117] "RemoveContainer" containerID="a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b" Jan 20 03:51:23 crc kubenswrapper[4898]: I0120 03:51:23.515383 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-897rl_1288aab6-09fa-40a3-8ff8-e00002a32d61/kube-multus/1.log" Jan 20 03:51:23 crc kubenswrapper[4898]: I0120 03:51:23.515500 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-897rl" event={"ID":"1288aab6-09fa-40a3-8ff8-e00002a32d61","Type":"ContainerStarted","Data":"9336477b9ee7e461f6c87e45d05e59a86b8d817b5945c3fa18c56fe5734ab967"} Jan 20 03:51:23 crc kubenswrapper[4898]: I0120 03:51:23.721221 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:23 crc kubenswrapper[4898]: E0120 03:51:23.723272 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:23 crc kubenswrapper[4898]: E0120 03:51:23.840886 4898 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 03:51:24 crc kubenswrapper[4898]: I0120 03:51:24.720183 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:24 crc kubenswrapper[4898]: I0120 03:51:24.720213 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:24 crc kubenswrapper[4898]: I0120 03:51:24.720290 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:24 crc kubenswrapper[4898]: E0120 03:51:24.720317 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:24 crc kubenswrapper[4898]: E0120 03:51:24.720468 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:24 crc kubenswrapper[4898]: E0120 03:51:24.720632 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:25 crc kubenswrapper[4898]: I0120 03:51:25.721237 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:25 crc kubenswrapper[4898]: E0120 03:51:25.721513 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:26 crc kubenswrapper[4898]: I0120 03:51:26.721104 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:26 crc kubenswrapper[4898]: I0120 03:51:26.721171 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:26 crc kubenswrapper[4898]: E0120 03:51:26.721291 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:26 crc kubenswrapper[4898]: I0120 03:51:26.721317 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:26 crc kubenswrapper[4898]: E0120 03:51:26.721648 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:26 crc kubenswrapper[4898]: E0120 03:51:26.721762 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:27 crc kubenswrapper[4898]: I0120 03:51:27.720498 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:27 crc kubenswrapper[4898]: E0120 03:51:27.720758 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:28 crc kubenswrapper[4898]: I0120 03:51:28.720630 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:28 crc kubenswrapper[4898]: I0120 03:51:28.720699 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:28 crc kubenswrapper[4898]: E0120 03:51:28.720815 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:28 crc kubenswrapper[4898]: I0120 03:51:28.720870 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:28 crc kubenswrapper[4898]: E0120 03:51:28.720975 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:28 crc kubenswrapper[4898]: E0120 03:51:28.721068 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:28 crc kubenswrapper[4898]: E0120 03:51:28.843056 4898 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 03:51:29 crc kubenswrapper[4898]: I0120 03:51:29.721365 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:29 crc kubenswrapper[4898]: E0120 03:51:29.721574 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:30 crc kubenswrapper[4898]: I0120 03:51:30.720403 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:30 crc kubenswrapper[4898]: I0120 03:51:30.720481 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:30 crc kubenswrapper[4898]: I0120 03:51:30.720536 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:30 crc kubenswrapper[4898]: E0120 03:51:30.721368 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:30 crc kubenswrapper[4898]: E0120 03:51:30.721542 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:30 crc kubenswrapper[4898]: I0120 03:51:30.721978 4898 scope.go:117] "RemoveContainer" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e" Jan 20 03:51:30 crc kubenswrapper[4898]: E0120 03:51:30.722013 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:31 crc kubenswrapper[4898]: I0120 03:51:31.548167 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/3.log" Jan 20 03:51:31 crc kubenswrapper[4898]: I0120 03:51:31.551775 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerStarted","Data":"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"} Jan 20 03:51:31 crc kubenswrapper[4898]: I0120 03:51:31.552361 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:51:31 crc kubenswrapper[4898]: I0120 03:51:31.595826 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podStartSLOduration=114.595805153 podStartE2EDuration="1m54.595805153s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:31.593196062 +0000 UTC m=+138.192983931" watchObservedRunningTime="2026-01-20 03:51:31.595805153 +0000 UTC m=+138.195593022" Jan 20 03:51:31 crc kubenswrapper[4898]: I0120 03:51:31.647579 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5hkf9"] Jan 20 03:51:31 crc kubenswrapper[4898]: I0120 03:51:31.647757 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:31 crc kubenswrapper[4898]: E0120 03:51:31.647876 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:32 crc kubenswrapper[4898]: I0120 03:51:32.720475 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:32 crc kubenswrapper[4898]: I0120 03:51:32.720512 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:32 crc kubenswrapper[4898]: I0120 03:51:32.720485 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:32 crc kubenswrapper[4898]: E0120 03:51:32.720663 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:32 crc kubenswrapper[4898]: E0120 03:51:32.720820 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:32 crc kubenswrapper[4898]: E0120 03:51:32.720981 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:33 crc kubenswrapper[4898]: I0120 03:51:33.721137 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:33 crc kubenswrapper[4898]: E0120 03:51:33.723051 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:33 crc kubenswrapper[4898]: E0120 03:51:33.843732 4898 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 03:51:34 crc kubenswrapper[4898]: I0120 03:51:34.720512 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:34 crc kubenswrapper[4898]: I0120 03:51:34.720649 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:34 crc kubenswrapper[4898]: E0120 03:51:34.720691 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:34 crc kubenswrapper[4898]: I0120 03:51:34.720737 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:34 crc kubenswrapper[4898]: E0120 03:51:34.720832 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:34 crc kubenswrapper[4898]: E0120 03:51:34.721510 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:35 crc kubenswrapper[4898]: I0120 03:51:35.721048 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:35 crc kubenswrapper[4898]: E0120 03:51:35.721290 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:36 crc kubenswrapper[4898]: I0120 03:51:36.720337 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:36 crc kubenswrapper[4898]: I0120 03:51:36.720447 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:36 crc kubenswrapper[4898]: I0120 03:51:36.720481 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:36 crc kubenswrapper[4898]: E0120 03:51:36.720559 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:36 crc kubenswrapper[4898]: E0120 03:51:36.720697 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:36 crc kubenswrapper[4898]: E0120 03:51:36.720808 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:37 crc kubenswrapper[4898]: I0120 03:51:37.720900 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:37 crc kubenswrapper[4898]: E0120 03:51:37.721112 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hkf9" podUID="e93f051c-f83c-4d27-a695-dd5a33e979f4" Jan 20 03:51:38 crc kubenswrapper[4898]: I0120 03:51:38.721111 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:38 crc kubenswrapper[4898]: I0120 03:51:38.721159 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:38 crc kubenswrapper[4898]: I0120 03:51:38.721159 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:38 crc kubenswrapper[4898]: E0120 03:51:38.721309 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 03:51:38 crc kubenswrapper[4898]: E0120 03:51:38.721426 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 03:51:38 crc kubenswrapper[4898]: E0120 03:51:38.721549 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 03:51:39 crc kubenswrapper[4898]: I0120 03:51:39.720386 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:51:39 crc kubenswrapper[4898]: I0120 03:51:39.723773 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 20 03:51:39 crc kubenswrapper[4898]: I0120 03:51:39.724341 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 20 03:51:39 crc kubenswrapper[4898]: I0120 03:51:39.976378 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 03:51:39 crc kubenswrapper[4898]: I0120 03:51:39.976526 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.684690 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.684867 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:40 crc kubenswrapper[4898]: E0120 03:51:40.684962 4898 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:51:40 crc kubenswrapper[4898]: E0120 03:51:40.684967 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:53:42.684921222 +0000 UTC m=+269.284709111 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:40 crc kubenswrapper[4898]: E0120 03:51:40.685219 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 03:53:42.685162639 +0000 UTC m=+269.284950538 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.720919 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.720919 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.721186 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.725007 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.725192 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.725229 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.726044 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.791269 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.791412 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.792532 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.797316 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.797348 4898 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:40 crc kubenswrapper[4898]: I0120 03:51:40.798117 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:41 crc kubenswrapper[4898]: I0120 03:51:41.045349 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 03:51:41 crc kubenswrapper[4898]: I0120 03:51:41.070197 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:41 crc kubenswrapper[4898]: W0120 03:51:41.277949 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-57d91274ea20733fddc1f7463cb7ef9be8d93d69d77ad86e0a1292bda4540b27 WatchSource:0}: Error finding container 57d91274ea20733fddc1f7463cb7ef9be8d93d69d77ad86e0a1292bda4540b27: Status 404 returned error can't find the container with id 57d91274ea20733fddc1f7463cb7ef9be8d93d69d77ad86e0a1292bda4540b27 Jan 20 03:51:41 crc kubenswrapper[4898]: W0120 03:51:41.299144 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-f30365d071b39617b2297d4ee24574d21f5c0394989a07c8f7c4b08effbc2248 WatchSource:0}: Error finding container f30365d071b39617b2297d4ee24574d21f5c0394989a07c8f7c4b08effbc2248: Status 404 returned error can't find the container with id f30365d071b39617b2297d4ee24574d21f5c0394989a07c8f7c4b08effbc2248 Jan 20 03:51:41 crc kubenswrapper[4898]: I0120 03:51:41.592915 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"574438689fa13a76ea6cc0f9196c7bfc3b988dda71234e9ca30fdade6ec4a13a"} Jan 20 03:51:41 crc kubenswrapper[4898]: I0120 03:51:41.592976 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"57d91274ea20733fddc1f7463cb7ef9be8d93d69d77ad86e0a1292bda4540b27"} Jan 20 03:51:41 crc kubenswrapper[4898]: I0120 03:51:41.593955 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:51:41 crc kubenswrapper[4898]: I0120 03:51:41.595789 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c23b3f73f7feedc8ff2218fc75c79c20f48b03eb50da0bdf3b2d9ed6e385431c"} Jan 20 03:51:41 crc kubenswrapper[4898]: I0120 03:51:41.595832 4898 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f30365d071b39617b2297d4ee24574d21f5c0394989a07c8f7c4b08effbc2248"} Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.017965 4898 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.077036 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ff5xs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.077989 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.078563 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l9cpd"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.079365 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.086712 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.087331 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.087826 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.088198 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.088725 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hd2t9"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.090184 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.118792 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.119214 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.119327 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.119653 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.120003 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.121060 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.121061 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.125215 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-n8lsh"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.125621 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.125630 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.125889 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.125974 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.126144 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.126689 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.126909 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.127303 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.127635 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.127676 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.127644 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.127889 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.128738 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.129230 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.129309 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hlwdq"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.129350 4898 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.129508 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.131063 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.131233 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.131304 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.134082 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zkb77"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.134209 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.134460 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m85tn"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.134820 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.135186 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.135274 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.135406 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.135485 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.135662 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.136686 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-g8rbm"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.137062 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.137800 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.138324 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.138624 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.144357 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.144141 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.145320 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.145493 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.145612 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.145907 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bpvpw"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.146423 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.148392 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.148716 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.148449 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.148485 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.149221 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.157238 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.149258 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.149626 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.149710 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.154263 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.154319 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.154607 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.154826 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.154897 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155014 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155085 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155127 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155183 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155328 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155398 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155455 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155480 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155512 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155583 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155636 4898 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155732 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155774 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.155859 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.156122 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.156163 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.156212 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.156243 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.156307 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.159505 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.167483 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.172066 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tbzbw"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.173361 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-tbzbw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.177970 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.178454 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.178480 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.179186 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.179531 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.179550 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.179744 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.179760 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.179927 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.180110 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.180299 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.182668 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-config\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.182726 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f35df32f-6245-445e-95c6-c419d45ab949-config\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.182777 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/620006cf-5c3f-457c-a416-30384cf951ec-audit-dir\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.182807 4898 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7qmn\" (UniqueName: \"kubernetes.io/projected/f35df32f-6245-445e-95c6-c419d45ab949-kube-api-access-w7qmn\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.182836 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.182858 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-audit-policies\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.182877 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-etcd-client\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183232 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183271 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb731558-acf5-4738-b505-c7ab65dbc2cf-auth-proxy-config\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183302 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpjbp\" (UniqueName: \"kubernetes.io/projected/4fa80055-6c27-434c-b6b3-166af5828101-kube-api-access-bpjbp\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183329 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5d94\" (UniqueName: \"kubernetes.io/projected/12b38f53-df50-4c41-bb9a-c4922ce023b2-kube-api-access-n5d94\") pod \"console-operator-58897d9998-l9cpd\" (UID: \"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183365 4898 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183399 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183447 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-serving-cert\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183469 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/baaced9e-4d77-491b-8898-028c9925a5c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m85tn\" (UID: \"baaced9e-4d77-491b-8898-028c9925a5c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183492 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183512 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baaced9e-4d77-491b-8898-028c9925a5c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-m85tn\" (UID: \"baaced9e-4d77-491b-8898-028c9925a5c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183531 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-etcd-serving-ca\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183549 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/395759b1-2c0e-4592-9b92-afb458e31327-etcd-client\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183565 4898 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b38f53-df50-4c41-bb9a-c4922ce023b2-serving-cert\") pod \"console-operator-58897d9998-l9cpd\" (UID: \"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183587 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/395759b1-2c0e-4592-9b92-afb458e31327-node-pullsecrets\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183607 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183628 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpgx7\" (UniqueName: \"kubernetes.io/projected/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-kube-api-access-wpgx7\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183646 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-audit\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183669 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183687 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183731 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183751 4898 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-images\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183771 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk576\" (UniqueName: \"kubernetes.io/projected/3a9b8d7c-e836-4661-856d-5a0e8276387e-kube-api-access-jk576\") pod \"dns-operator-744455d44c-g8rbm\" (UID: \"3a9b8d7c-e836-4661-856d-5a0e8276387e\") " pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183791 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa80055-6c27-434c-b6b3-166af5828101-serving-cert\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183814 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fttgw\" (UniqueName: \"kubernetes.io/projected/620006cf-5c3f-457c-a416-30384cf951ec-kube-api-access-fttgw\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183835 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f35df32f-6245-445e-95c6-c419d45ab949-serving-cert\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183854 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fb731558-acf5-4738-b505-c7ab65dbc2cf-machine-approver-tls\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.183873 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-audit-policies\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.184188 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ctlvr"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.189694 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-encryption-config\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.189721 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.189774 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.189824 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzqz\" (UniqueName: \"kubernetes.io/projected/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-kube-api-access-6fzqz\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.189862 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhdgl\" (UniqueName: \"kubernetes.io/projected/fb731558-acf5-4738-b505-c7ab65dbc2cf-kube-api-access-rhdgl\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.189900 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a703a3d0-8ee9-4319-b2e0-0e0292eb8d98-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bn8jb\" (UID: \"a703a3d0-8ee9-4319-b2e0-0e0292eb8d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.189934 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.189821 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.189967 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzqbt\" (UniqueName: \"kubernetes.io/projected/f5a9727e-9d16-4c1c-9279-ab4bb06fd41d-kube-api-access-bzqbt\") pod \"openshift-apiserver-operator-796bbdcf4f-vntvs\" (UID: \"f5a9727e-9d16-4c1c-9279-ab4bb06fd41d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.190002 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.190034 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.190069 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-image-import-ca\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.190096 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.190128 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-audit-dir\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.190152 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-config\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.190581 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-config\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.205777 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206270 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206321 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f35df32f-6245-445e-95c6-c419d45ab949-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206343 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12b38f53-df50-4c41-bb9a-c4922ce023b2-trusted-ca\") pod \"console-operator-58897d9998-l9cpd\" (UID: \"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206370 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/395759b1-2c0e-4592-9b92-afb458e31327-serving-cert\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206394 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/395759b1-2c0e-4592-9b92-afb458e31327-audit-dir\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206416 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62zbx\" (UniqueName: \"kubernetes.io/projected/395759b1-2c0e-4592-9b92-afb458e31327-kube-api-access-62zbx\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206421 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206466 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b38f53-df50-4c41-bb9a-c4922ce023b2-config\") pod \"console-operator-58897d9998-l9cpd\" (UID: \"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206483 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206501 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvcn\" (UniqueName: \"kubernetes.io/projected/baaced9e-4d77-491b-8898-028c9925a5c2-kube-api-access-fhvcn\") pod \"openshift-config-operator-7777fb866f-m85tn\" (UID: \"baaced9e-4d77-491b-8898-028c9925a5c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206519 4898 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb731558-acf5-4738-b505-c7ab65dbc2cf-config\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206547 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grsgl\" (UniqueName: \"kubernetes.io/projected/a703a3d0-8ee9-4319-b2e0-0e0292eb8d98-kube-api-access-grsgl\") pod \"cluster-samples-operator-665b6dd947-bn8jb\" (UID: \"a703a3d0-8ee9-4319-b2e0-0e0292eb8d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206563 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5a9727e-9d16-4c1c-9279-ab4bb06fd41d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vntvs\" (UID: \"f5a9727e-9d16-4c1c-9279-ab4bb06fd41d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206580 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5a9727e-9d16-4c1c-9279-ab4bb06fd41d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vntvs\" (UID: \"f5a9727e-9d16-4c1c-9279-ab4bb06fd41d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206599 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-client-ca\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206613 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f35df32f-6245-445e-95c6-c419d45ab949-service-ca-bundle\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206628 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/395759b1-2c0e-4592-9b92-afb458e31327-encryption-config\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206643 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a9b8d7c-e836-4661-856d-5a0e8276387e-metrics-tls\") pod \"dns-operator-744455d44c-g8rbm\" (UID: \"3a9b8d7c-e836-4661-856d-5a0e8276387e\") " pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.206807 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.208187 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.208558 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.208835 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.218221 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.218967 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.219112 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.219699 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.219776 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.219855 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.225072 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.226515 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.229873 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.230689 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.232409 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.233044 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.234419 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.234826 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.234848 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.234966 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.234994 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.234968 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.235095 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.235232 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.235249 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.235393 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.235604 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.237098 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.237571 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.238192 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kfxts"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.238258 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.239247 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.242690 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-bmzz9"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.243284 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.243362 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.243956 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.244238 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.244661 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.247774 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.250878 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.251573 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.254354 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.262117 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.263242 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hv5vs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.264020 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.265584 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.266520 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.266810 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.268369 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.272074 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.272842 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.273938 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.274846 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.276271 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.277404 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.279361 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.280399 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.284603 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.287410 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bsjcr"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.287607 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.289122 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.289289 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.289657 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.289747 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.290207 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.292765 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vknkf"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.293417 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hd2t9"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.293501 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.293809 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m85tn"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.296324 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-n8lsh"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.297664 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.299295 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l9cpd"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.299374 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zkb77"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.300346 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.301030 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.301763 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tbzbw"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.303038 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.304027 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.305644 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.306959 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307297 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5d94\" (UniqueName: \"kubernetes.io/projected/12b38f53-df50-4c41-bb9a-c4922ce023b2-kube-api-access-n5d94\") pod \"console-operator-58897d9998-l9cpd\" (UID: \"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307329 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307354 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307371 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-serving-cert\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307389 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baaced9e-4d77-491b-8898-028c9925a5c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-m85tn\" (UID: \"baaced9e-4d77-491b-8898-028c9925a5c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307405 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/baaced9e-4d77-491b-8898-028c9925a5c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m85tn\" (UID: \"baaced9e-4d77-491b-8898-028c9925a5c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307423 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307454 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/395759b1-2c0e-4592-9b92-afb458e31327-etcd-client\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307474 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-etcd-serving-ca\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307497 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-registry-tls\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307516 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b38f53-df50-4c41-bb9a-c4922ce023b2-serving-cert\") pod \"console-operator-58897d9998-l9cpd\" (UID: \"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307535 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/395759b1-2c0e-4592-9b92-afb458e31327-node-pullsecrets\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307557 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307617 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-audit\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307643 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307664 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307689 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpgx7\" (UniqueName: \"kubernetes.io/projected/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-kube-api-access-wpgx7\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307705 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9941fb67-6521-471d-8034-3cb2f695ee40-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307725 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307748 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-images\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307764 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk576\" (UniqueName: \"kubernetes.io/projected/3a9b8d7c-e836-4661-856d-5a0e8276387e-kube-api-access-jk576\") pod \"dns-operator-744455d44c-g8rbm\" (UID: \"3a9b8d7c-e836-4661-856d-5a0e8276387e\") " pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307816 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa80055-6c27-434c-b6b3-166af5828101-serving-cert\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307836 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59zpb\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-kube-api-access-59zpb\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307857 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307874 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fttgw\" (UniqueName: \"kubernetes.io/projected/620006cf-5c3f-457c-a416-30384cf951ec-kube-api-access-fttgw\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.307891 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f35df32f-6245-445e-95c6-c419d45ab949-serving-cert\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308015 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/baaced9e-4d77-491b-8898-028c9925a5c2-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-m85tn\" (UID: \"baaced9e-4d77-491b-8898-028c9925a5c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308030 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fb731558-acf5-4738-b505-c7ab65dbc2cf-machine-approver-tls\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308053 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-audit-policies\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308068 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-encryption-config\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308085 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhdgl\" (UniqueName: \"kubernetes.io/projected/fb731558-acf5-4738-b505-c7ab65dbc2cf-kube-api-access-rhdgl\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308070 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308103 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308121 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzqz\" (UniqueName: \"kubernetes.io/projected/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-kube-api-access-6fzqz\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308145 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308199 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzqbt\" (UniqueName: 
\"kubernetes.io/projected/f5a9727e-9d16-4c1c-9279-ab4bb06fd41d-kube-api-access-bzqbt\") pod \"openshift-apiserver-operator-796bbdcf4f-vntvs\" (UID: \"f5a9727e-9d16-4c1c-9279-ab4bb06fd41d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308223 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a703a3d0-8ee9-4319-b2e0-0e0292eb8d98-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bn8jb\" (UID: \"a703a3d0-8ee9-4319-b2e0-0e0292eb8d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308242 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308261 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308277 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308292 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-audit-dir\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308306 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-config\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308322 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-image-import-ca\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308340 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-registry-certificates\") pod 
\"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308359 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f35df32f-6245-445e-95c6-c419d45ab949-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308382 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-config\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308400 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308420 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12b38f53-df50-4c41-bb9a-c4922ce023b2-trusted-ca\") pod \"console-operator-58897d9998-l9cpd\" (UID: \"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308464 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/395759b1-2c0e-4592-9b92-afb458e31327-serving-cert\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308482 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/395759b1-2c0e-4592-9b92-afb458e31327-audit-dir\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308506 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62zbx\" (UniqueName: \"kubernetes.io/projected/395759b1-2c0e-4592-9b92-afb458e31327-kube-api-access-62zbx\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308538 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b38f53-df50-4c41-bb9a-c4922ce023b2-config\") pod \"console-operator-58897d9998-l9cpd\" (UID: \"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308554 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308596 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9941fb67-6521-471d-8034-3cb2f695ee40-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308623 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvcn\" (UniqueName: \"kubernetes.io/projected/baaced9e-4d77-491b-8898-028c9925a5c2-kube-api-access-fhvcn\") pod \"openshift-config-operator-7777fb866f-m85tn\" (UID: \"baaced9e-4d77-491b-8898-028c9925a5c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308643 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb731558-acf5-4738-b505-c7ab65dbc2cf-config\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308661 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grsgl\" (UniqueName: \"kubernetes.io/projected/a703a3d0-8ee9-4319-b2e0-0e0292eb8d98-kube-api-access-grsgl\") pod \"cluster-samples-operator-665b6dd947-bn8jb\" (UID: \"a703a3d0-8ee9-4319-b2e0-0e0292eb8d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308679 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5a9727e-9d16-4c1c-9279-ab4bb06fd41d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vntvs\" (UID: \"f5a9727e-9d16-4c1c-9279-ab4bb06fd41d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308698 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5a9727e-9d16-4c1c-9279-ab4bb06fd41d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vntvs\" (UID: \"f5a9727e-9d16-4c1c-9279-ab4bb06fd41d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308718 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-client-ca\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308735 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-trusted-ca\") 
pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308753 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/395759b1-2c0e-4592-9b92-afb458e31327-encryption-config\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308773 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a9b8d7c-e836-4661-856d-5a0e8276387e-metrics-tls\") pod \"dns-operator-744455d44c-g8rbm\" (UID: \"3a9b8d7c-e836-4661-856d-5a0e8276387e\") " pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308780 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308795 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f35df32f-6245-445e-95c6-c419d45ab949-service-ca-bundle\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308822 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f35df32f-6245-445e-95c6-c419d45ab949-config\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308842 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/620006cf-5c3f-457c-a416-30384cf951ec-audit-dir\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308859 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-config\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308875 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7qmn\" (UniqueName: \"kubernetes.io/projected/f35df32f-6245-445e-95c6-c419d45ab949-kube-api-access-w7qmn\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 
03:51:48.308892 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308909 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-audit-policies\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308924 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-etcd-client\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308940 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308956 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb731558-acf5-4738-b505-c7ab65dbc2cf-auth-proxy-config\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308971 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpjbp\" (UniqueName: \"kubernetes.io/projected/4fa80055-6c27-434c-b6b3-166af5828101-kube-api-access-bpjbp\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.308989 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-bound-sa-token\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.309087 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ff5xs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.309357 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-images\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 
03:51:48.309382 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f35df32f-6245-445e-95c6-c419d45ab949-service-ca-bundle\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: E0120 03:51:48.310252 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:48.810237674 +0000 UTC m=+155.410025533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.310352 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f35df32f-6245-445e-95c6-c419d45ab949-config\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.310397 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/620006cf-5c3f-457c-a416-30384cf951ec-audit-dir\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.310418 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-etcd-serving-ca\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.311129 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.313785 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-config\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.314000 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-audit-policies\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.314151 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/395759b1-2c0e-4592-9b92-afb458e31327-node-pullsecrets\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.314551 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.314583 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.314956 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-audit-policies\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.314940 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/395759b1-2c0e-4592-9b92-afb458e31327-audit-dir\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.317129 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.317312 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.317546 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f35df32f-6245-445e-95c6-c419d45ab949-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.318069 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.318240 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-config\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.318278 4898 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb731558-acf5-4738-b505-c7ab65dbc2cf-auth-proxy-config\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.318405 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.318838 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.318920 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-serving-cert\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.319116 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fb731558-acf5-4738-b505-c7ab65dbc2cf-machine-approver-tls\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.319159 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.319181 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hlwdq"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.319216 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-audit-dir\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.319498 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b38f53-df50-4c41-bb9a-c4922ce023b2-config\") pod \"console-operator-58897d9998-l9cpd\" (UID: \"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.319634 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12b38f53-df50-4c41-bb9a-c4922ce023b2-trusted-ca\") pod \"console-operator-58897d9998-l9cpd\" (UID: \"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " 
pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.319752 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa80055-6c27-434c-b6b3-166af5828101-serving-cert\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.320168 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-audit\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.320558 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-config\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.320571 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.320653 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/395759b1-2c0e-4592-9b92-afb458e31327-etcd-client\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.320778 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb731558-acf5-4738-b505-c7ab65dbc2cf-config\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.320959 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-client-ca\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.321266 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5a9727e-9d16-4c1c-9279-ab4bb06fd41d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vntvs\" (UID: \"f5a9727e-9d16-4c1c-9279-ab4bb06fd41d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.321269 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b38f53-df50-4c41-bb9a-c4922ce023b2-serving-cert\") pod \"console-operator-58897d9998-l9cpd\" (UID: 
\"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.321655 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.322104 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/395759b1-2c0e-4592-9b92-afb458e31327-image-import-ca\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.323198 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f35df32f-6245-445e-95c6-c419d45ab949-serving-cert\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.323682 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.324014 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.324704 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/395759b1-2c0e-4592-9b92-afb458e31327-serving-cert\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.324994 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-g8rbm"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.325069 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5a9727e-9d16-4c1c-9279-ab4bb06fd41d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vntvs\" (UID: \"f5a9727e-9d16-4c1c-9279-ab4bb06fd41d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.325210 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.325566 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.325761 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.327556 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-encryption-config\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.328073 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.328179 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kfxts"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.328313 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/395759b1-2c0e-4592-9b92-afb458e31327-encryption-config\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.328541 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.329479 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vknkf"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.329857 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a9b8d7c-e836-4661-856d-5a0e8276387e-metrics-tls\") pod \"dns-operator-744455d44c-g8rbm\" (UID: \"3a9b8d7c-e836-4661-856d-5a0e8276387e\") " pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.330724 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-etcd-client\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.330996 
4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.331070 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baaced9e-4d77-491b-8898-028c9925a5c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-m85tn\" (UID: \"baaced9e-4d77-491b-8898-028c9925a5c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.331169 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a703a3d0-8ee9-4319-b2e0-0e0292eb8d98-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bn8jb\" (UID: \"a703a3d0-8ee9-4319-b2e0-0e0292eb8d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.332064 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ctlvr"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.333310 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.337347 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.337392 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4xpjs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.338296 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.338881 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.339779 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.340681 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-pq7b2"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.341387 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hv5vs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.341488 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.342944 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.352310 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bpvpw"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.355044 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.357419 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bsjcr"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.360862 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.362602 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2xvbf"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.363868 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2xvbf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.363966 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-z2lzs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.364732 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z2lzs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.365494 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.366656 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.368184 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.369748 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4xpjs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.370959 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.372151 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.374321 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2xvbf"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.375598 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z2lzs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.376612 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs"] Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.380239 4898 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.399662 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.409676 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:48 crc kubenswrapper[4898]: E0120 03:51:48.409825 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:48.909803156 +0000 UTC m=+155.509591015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.409875 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df739f36-70f7-4dd4-a86b-4aa6e65a3465-proxy-tls\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.409934 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9941fb67-6521-471d-8034-3cb2f695ee40-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.409961 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c15d357-55d5-4906-938d-4d47f3965b3b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4wb66\" (UID: \"4c15d357-55d5-4906-938d-4d47f3965b3b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.409982 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a73f73a2-1335-45a7-867b-18585f1c0862-config-volume\") pod \"collect-profiles-29481345-v9njs\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410000 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/766f27f8-ddbf-4cf7-909a-424958a89fe2-proxy-tls\") pod \"machine-config-controller-84d6567774-b2j6r\" (UID: \"766f27f8-ddbf-4cf7-909a-424958a89fe2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410020 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvs8s\" (UniqueName: \"kubernetes.io/projected/c3ae44e5-109c-4893-968a-84304c3edcfb-kube-api-access-cvs8s\") pod \"service-ca-operator-777779d784-lh4fv\" (UID: \"c3ae44e5-109c-4893-968a-84304c3edcfb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410038 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd9a227b-a085-42ce-b4b7-05fcfd678215-metrics-certs\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410058 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbbae7cb-6e5f-4122-9a1c-f6117ed70def-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-57gxc\" (UID: \"dbbae7cb-6e5f-4122-9a1c-f6117ed70def\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410076 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-trusted-ca-bundle\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410091 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-config\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410109 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-cabundle\") pod \"service-ca-9c57cc56f-vknkf\" (UID: \"cdb22f54-0343-40c5-94d9-9a743e7b875c\") " pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410139 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a54c7052-c047-4ef5-a201-796f444ad467-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kskjk\" (UID: \"a54c7052-c047-4ef5-a201-796f444ad467\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410277 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw857\" (UniqueName: 
\"kubernetes.io/projected/1d90cae4-9acf-48f9-84ac-373717661814-kube-api-access-hw857\") pod \"machine-config-server-pq7b2\" (UID: \"1d90cae4-9acf-48f9-84ac-373717661814\") " pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410355 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8fzg\" (UniqueName: \"kubernetes.io/projected/480eb1b9-9ac2-4353-9216-751da9b33e4f-kube-api-access-t8fzg\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410381 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9941fb67-6521-471d-8034-3cb2f695ee40-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410383 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8j42\" (UniqueName: \"kubernetes.io/projected/ccd714e4-5975-4306-bf59-a1542a08367b-kube-api-access-q8j42\") pod \"downloads-7954f5f757-tbzbw\" (UID: \"ccd714e4-5975-4306-bf59-a1542a08367b\") " pod="openshift-console/downloads-7954f5f757-tbzbw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410460 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rglv\" (UniqueName: \"kubernetes.io/projected/810f3b12-b157-4a65-becc-0490f489bcd9-kube-api-access-7rglv\") pod \"catalog-operator-68c6474976-hp6hs\" (UID: \"810f3b12-b157-4a65-becc-0490f489bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410524 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s56mg\" (UniqueName: \"kubernetes.io/projected/7b2b0787-24b7-42e6-b0a6-86eaa18560a8-kube-api-access-s56mg\") pod \"migrator-59844c95c7-8sdnz\" (UID: \"7b2b0787-24b7-42e6-b0a6-86eaa18560a8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410577 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c2a3f52-4642-4c41-8dad-ac50db0c6763-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: \"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410606 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4vnl\" (UniqueName: \"kubernetes.io/projected/a71ba0b6-92d4-4756-b286-f93ce475a236-kube-api-access-j4vnl\") pod \"kube-storage-version-migrator-operator-b67b599dd-qbgkv\" (UID: \"a71ba0b6-92d4-4756-b286-f93ce475a236\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410644 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-registry-certificates\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410675 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df739f36-70f7-4dd4-a86b-4aa6e65a3465-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410705 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-certs\") pod \"machine-config-server-pq7b2\" (UID: \"1d90cae4-9acf-48f9-84ac-373717661814\") " pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410808 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/df739f36-70f7-4dd4-a86b-4aa6e65a3465-images\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410849 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/be3f72ea-b769-4522-8cf3-f4e326329cf7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410877 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-mountpoint-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410902 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9941fb67-6521-471d-8034-3cb2f695ee40-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.410926 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-socket-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411003 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3ae44e5-109c-4893-968a-84304c3edcfb-config\") pod 
\"service-ca-operator-777779d784-lh4fv\" (UID: \"c3ae44e5-109c-4893-968a-84304c3edcfb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411046 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-service-ca\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411067 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7cb386f8-d968-4790-b003-48452b55487c-etcd-ca\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411128 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5893dc-b521-419f-afc7-07dd1aaac395-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmbwr\" (UID: \"7d5893dc-b521-419f-afc7-07dd1aaac395\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411171 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7cb386f8-d968-4790-b003-48452b55487c-etcd-client\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411191 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ae44e5-109c-4893-968a-84304c3edcfb-serving-cert\") pod \"service-ca-operator-777779d784-lh4fv\" (UID: \"c3ae44e5-109c-4893-968a-84304c3edcfb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411214 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7ae319d-3396-4567-8cbf-d9d331d01be4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4mccz\" (UID: \"f7ae319d-3396-4567-8cbf-d9d331d01be4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411254 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjwm7\" (UniqueName: \"kubernetes.io/projected/4360d6c0-d5f1-49ae-917b-86560151e7ff-kube-api-access-bjwm7\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvkxk\" (UID: \"4360d6c0-d5f1-49ae-917b-86560151e7ff\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411294 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8xx8\" (UniqueName: \"kubernetes.io/projected/dbbae7cb-6e5f-4122-9a1c-f6117ed70def-kube-api-access-x8xx8\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-57gxc\" (UID: \"dbbae7cb-6e5f-4122-9a1c-f6117ed70def\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411502 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c15d357-55d5-4906-938d-4d47f3965b3b-srv-cert\") pod \"olm-operator-6b444d44fb-4wb66\" (UID: \"4c15d357-55d5-4906-938d-4d47f3965b3b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411550 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-node-bootstrap-token\") pod \"machine-config-server-pq7b2\" (UID: \"1d90cae4-9acf-48f9-84ac-373717661814\") " pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411588 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00fedf08-d9d4-43f5-96ff-3f705c050a96-serving-cert\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411616 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a54c7052-c047-4ef5-a201-796f444ad467-config\") pod \"kube-apiserver-operator-766d6c64bb-kskjk\" (UID: \"a54c7052-c047-4ef5-a201-796f444ad467\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411668 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7fcb\" (UniqueName: \"kubernetes.io/projected/8a0b7e05-ef31-426e-989f-a6ad6c710150-kube-api-access-v7fcb\") pod \"marketplace-operator-79b997595-bsjcr\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411703 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be3f72ea-b769-4522-8cf3-f4e326329cf7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411746 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmjgt\" (UniqueName: \"kubernetes.io/projected/dd9a227b-a085-42ce-b4b7-05fcfd678215-kube-api-access-tmjgt\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411771 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a71ba0b6-92d4-4756-b286-f93ce475a236-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qbgkv\" (UID: \"a71ba0b6-92d4-4756-b286-f93ce475a236\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411813 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-config\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411836 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/810f3b12-b157-4a65-becc-0490f489bcd9-srv-cert\") pod \"catalog-operator-68c6474976-hp6hs\" (UID: \"810f3b12-b157-4a65-becc-0490f489bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.411969 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-registry-tls\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412008 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-oauth-config\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412052 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbaa55a-3008-4dc1-bc39-460904964ec3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zwrwk\" (UID: \"9bbaa55a-3008-4dc1-bc39-460904964ec3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412213 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7ae319d-3396-4567-8cbf-d9d331d01be4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4mccz\" (UID: \"f7ae319d-3396-4567-8cbf-d9d331d01be4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412309 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9xft\" (UniqueName: \"kubernetes.io/projected/be3f72ea-b769-4522-8cf3-f4e326329cf7-kube-api-access-p9xft\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412500 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dd9a227b-a085-42ce-b4b7-05fcfd678215-default-certificate\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412538 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-registration-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412567 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmp7h\" (UniqueName: \"kubernetes.io/projected/00fedf08-d9d4-43f5-96ff-3f705c050a96-kube-api-access-fmp7h\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412595 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbbae7cb-6e5f-4122-9a1c-f6117ed70def-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-57gxc\" (UID: \"dbbae7cb-6e5f-4122-9a1c-f6117ed70def\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412618 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx725\" (UniqueName: \"kubernetes.io/projected/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-kube-api-access-dx725\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412639 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a73f73a2-1335-45a7-867b-18585f1c0862-secret-volume\") pod \"collect-profiles-29481345-v9njs\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412664 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skd52\" (UniqueName: \"kubernetes.io/projected/a4852cac-3462-451a-b007-d9598c7acb67-kube-api-access-skd52\") pod \"multus-admission-controller-857f4d67dd-hv5vs\" (UID: \"a4852cac-3462-451a-b007-d9598c7acb67\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412712 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/766f27f8-ddbf-4cf7-909a-424958a89fe2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-b2j6r\" (UID: \"766f27f8-ddbf-4cf7-909a-424958a89fe2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412845 4898 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-59zpb\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-kube-api-access-59zpb\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.412930 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46ccx\" (UniqueName: \"kubernetes.io/projected/cdb22f54-0343-40c5-94d9-9a743e7b875c-kube-api-access-46ccx\") pod \"service-ca-9c57cc56f-vknkf\" (UID: \"cdb22f54-0343-40c5-94d9-9a743e7b875c\") " pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.413063 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.413102 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-tmpfs\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.413130 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsvx5\" (UniqueName: \"kubernetes.io/projected/7cb386f8-d968-4790-b003-48452b55487c-kube-api-access-rsvx5\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.413150 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-registry-certificates\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.413167 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cb386f8-d968-4790-b003-48452b55487c-serving-cert\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.413380 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a71ba0b6-92d4-4756-b286-f93ce475a236-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qbgkv\" (UID: \"a71ba0b6-92d4-4756-b286-f93ce475a236\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" Jan 20 03:51:48 crc kubenswrapper[4898]: E0120 03:51:48.413914 4898 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:48.913903063 +0000 UTC m=+155.513691152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.414107 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bsjcr\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.414153 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnwnq\" (UniqueName: \"kubernetes.io/projected/df739f36-70f7-4dd4-a86b-4aa6e65a3465-kube-api-access-hnwnq\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.414277 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-webhook-cert\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.414358 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-plugins-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.414740 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cb386f8-d968-4790-b003-48452b55487c-config\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.414781 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ae319d-3396-4567-8cbf-d9d331d01be4-config\") pod \"kube-controller-manager-operator-78b949d7b-4mccz\" (UID: \"f7ae319d-3396-4567-8cbf-d9d331d01be4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.414816 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/4360d6c0-d5f1-49ae-917b-86560151e7ff-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvkxk\" (UID: \"4360d6c0-d5f1-49ae-917b-86560151e7ff\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.414857 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a4852cac-3462-451a-b007-d9598c7acb67-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hv5vs\" (UID: \"a4852cac-3462-451a-b007-d9598c7acb67\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.414907 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-oauth-serving-cert\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415037 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1c2a3f52-4642-4c41-8dad-ac50db0c6763-metrics-tls\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: \"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415139 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn7mq\" (UniqueName: \"kubernetes.io/projected/1c2a3f52-4642-4c41-8dad-ac50db0c6763-kube-api-access-gn7mq\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: \"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415238 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cb386f8-d968-4790-b003-48452b55487c-etcd-service-ca\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415315 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-trusted-ca\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415398 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh9kg\" (UniqueName: \"kubernetes.io/projected/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-kube-api-access-dh9kg\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415467 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/1c2a3f52-4642-4c41-8dad-ac50db0c6763-trusted-ca\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: \"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415501 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5893dc-b521-419f-afc7-07dd1aaac395-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmbwr\" (UID: \"7d5893dc-b521-419f-afc7-07dd1aaac395\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415576 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a54c7052-c047-4ef5-a201-796f444ad467-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kskjk\" (UID: \"a54c7052-c047-4ef5-a201-796f444ad467\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415720 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-bound-sa-token\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415759 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dd9a227b-a085-42ce-b4b7-05fcfd678215-stats-auth\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415786 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-apiservice-cert\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415818 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5893dc-b521-419f-afc7-07dd1aaac395-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmbwr\" (UID: \"7d5893dc-b521-419f-afc7-07dd1aaac395\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.415848 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be3f72ea-b769-4522-8cf3-f4e326329cf7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416042 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs4r6\" (UniqueName: 
\"kubernetes.io/projected/766f27f8-ddbf-4cf7-909a-424958a89fe2-kube-api-access-gs4r6\") pod \"machine-config-controller-84d6567774-b2j6r\" (UID: \"766f27f8-ddbf-4cf7-909a-424958a89fe2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416236 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-client-ca\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416270 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pjqr\" (UniqueName: \"kubernetes.io/projected/a73f73a2-1335-45a7-867b-18585f1c0862-kube-api-access-6pjqr\") pod \"collect-profiles-29481345-v9njs\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416318 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9a227b-a085-42ce-b4b7-05fcfd678215-service-ca-bundle\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416342 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-serving-cert\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416543 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9941fb67-6521-471d-8034-3cb2f695ee40-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416566 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-key\") pod \"service-ca-9c57cc56f-vknkf\" (UID: \"cdb22f54-0343-40c5-94d9-9a743e7b875c\") " pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416622 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/810f3b12-b157-4a65-becc-0490f489bcd9-profile-collector-cert\") pod \"catalog-operator-68c6474976-hp6hs\" (UID: \"810f3b12-b157-4a65-becc-0490f489bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416656 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bsjcr\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416673 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zpcs\" (UniqueName: \"kubernetes.io/projected/4c15d357-55d5-4906-938d-4d47f3965b3b-kube-api-access-8zpcs\") pod \"olm-operator-6b444d44fb-4wb66\" (UID: \"4c15d357-55d5-4906-938d-4d47f3965b3b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416719 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-csi-data-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.416740 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5nnv\" (UniqueName: \"kubernetes.io/projected/9bbaa55a-3008-4dc1-bc39-460904964ec3-kube-api-access-p5nnv\") pod \"package-server-manager-789f6589d5-zwrwk\" (UID: \"9bbaa55a-3008-4dc1-bc39-460904964ec3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.417190 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-trusted-ca\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.419690 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-registry-tls\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.419859 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.440052 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.459831 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.479677 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.500506 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517204 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517467 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/df739f36-70f7-4dd4-a86b-4aa6e65a3465-images\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517495 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/be3f72ea-b769-4522-8cf3-f4e326329cf7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517514 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-mountpoint-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517532 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-socket-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517559 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3ae44e5-109c-4893-968a-84304c3edcfb-config\") pod \"service-ca-operator-777779d784-lh4fv\" (UID: \"c3ae44e5-109c-4893-968a-84304c3edcfb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517575 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-service-ca\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517591 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5893dc-b521-419f-afc7-07dd1aaac395-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmbwr\" (UID: \"7d5893dc-b521-419f-afc7-07dd1aaac395\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517606 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7cb386f8-d968-4790-b003-48452b55487c-etcd-ca\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 
20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517621 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ae44e5-109c-4893-968a-84304c3edcfb-serving-cert\") pod \"service-ca-operator-777779d784-lh4fv\" (UID: \"c3ae44e5-109c-4893-968a-84304c3edcfb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517640 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7cb386f8-d968-4790-b003-48452b55487c-etcd-client\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517655 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7ae319d-3396-4567-8cbf-d9d331d01be4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4mccz\" (UID: \"f7ae319d-3396-4567-8cbf-d9d331d01be4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517682 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjwm7\" (UniqueName: \"kubernetes.io/projected/4360d6c0-d5f1-49ae-917b-86560151e7ff-kube-api-access-bjwm7\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvkxk\" (UID: \"4360d6c0-d5f1-49ae-917b-86560151e7ff\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517699 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8xx8\" (UniqueName: \"kubernetes.io/projected/dbbae7cb-6e5f-4122-9a1c-f6117ed70def-kube-api-access-x8xx8\") pod \"openshift-controller-manager-operator-756b6f6bc6-57gxc\" (UID: \"dbbae7cb-6e5f-4122-9a1c-f6117ed70def\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517717 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c15d357-55d5-4906-938d-4d47f3965b3b-srv-cert\") pod \"olm-operator-6b444d44fb-4wb66\" (UID: \"4c15d357-55d5-4906-938d-4d47f3965b3b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517734 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-node-bootstrap-token\") pod \"machine-config-server-pq7b2\" (UID: \"1d90cae4-9acf-48f9-84ac-373717661814\") " pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517756 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00fedf08-d9d4-43f5-96ff-3f705c050a96-serving-cert\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517777 4898 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a54c7052-c047-4ef5-a201-796f444ad467-config\") pod \"kube-apiserver-operator-766d6c64bb-kskjk\" (UID: \"a54c7052-c047-4ef5-a201-796f444ad467\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517795 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7fcb\" (UniqueName: \"kubernetes.io/projected/8a0b7e05-ef31-426e-989f-a6ad6c710150-kube-api-access-v7fcb\") pod \"marketplace-operator-79b997595-bsjcr\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517815 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be3f72ea-b769-4522-8cf3-f4e326329cf7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517830 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/810f3b12-b157-4a65-becc-0490f489bcd9-srv-cert\") pod \"catalog-operator-68c6474976-hp6hs\" (UID: \"810f3b12-b157-4a65-becc-0490f489bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517850 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmjgt\" (UniqueName: \"kubernetes.io/projected/dd9a227b-a085-42ce-b4b7-05fcfd678215-kube-api-access-tmjgt\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517868 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71ba0b6-92d4-4756-b286-f93ce475a236-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qbgkv\" (UID: \"a71ba0b6-92d4-4756-b286-f93ce475a236\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517886 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-config\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517902 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbaa55a-3008-4dc1-bc39-460904964ec3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zwrwk\" (UID: \"9bbaa55a-3008-4dc1-bc39-460904964ec3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517920 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-oauth-config\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517940 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7ae319d-3396-4567-8cbf-d9d331d01be4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4mccz\" (UID: \"f7ae319d-3396-4567-8cbf-d9d331d01be4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517956 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9xft\" (UniqueName: \"kubernetes.io/projected/be3f72ea-b769-4522-8cf3-f4e326329cf7-kube-api-access-p9xft\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517974 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmp7h\" (UniqueName: \"kubernetes.io/projected/00fedf08-d9d4-43f5-96ff-3f705c050a96-kube-api-access-fmp7h\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.517992 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dd9a227b-a085-42ce-b4b7-05fcfd678215-default-certificate\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.518009 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-registration-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.518024 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbbae7cb-6e5f-4122-9a1c-f6117ed70def-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-57gxc\" (UID: \"dbbae7cb-6e5f-4122-9a1c-f6117ed70def\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.518042 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx725\" (UniqueName: \"kubernetes.io/projected/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-kube-api-access-dx725\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.518059 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a73f73a2-1335-45a7-867b-18585f1c0862-secret-volume\") pod 
\"collect-profiles-29481345-v9njs\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.518534 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skd52\" (UniqueName: \"kubernetes.io/projected/a4852cac-3462-451a-b007-d9598c7acb67-kube-api-access-skd52\") pod \"multus-admission-controller-857f4d67dd-hv5vs\" (UID: \"a4852cac-3462-451a-b007-d9598c7acb67\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.518642 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/766f27f8-ddbf-4cf7-909a-424958a89fe2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-b2j6r\" (UID: \"766f27f8-ddbf-4cf7-909a-424958a89fe2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.518749 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46ccx\" (UniqueName: \"kubernetes.io/projected/cdb22f54-0343-40c5-94d9-9a743e7b875c-kube-api-access-46ccx\") pod \"service-ca-9c57cc56f-vknkf\" (UID: \"cdb22f54-0343-40c5-94d9-9a743e7b875c\") " pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.518835 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-tmpfs\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.518869 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-config\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.518899 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsvx5\" (UniqueName: \"kubernetes.io/projected/7cb386f8-d968-4790-b003-48452b55487c-kube-api-access-rsvx5\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.518969 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cb386f8-d968-4790-b003-48452b55487c-serving-cert\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519041 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a71ba0b6-92d4-4756-b286-f93ce475a236-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qbgkv\" (UID: \"a71ba0b6-92d4-4756-b286-f93ce475a236\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" Jan 20 03:51:48 
crc kubenswrapper[4898]: I0120 03:51:48.519157 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bsjcr\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519210 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnwnq\" (UniqueName: \"kubernetes.io/projected/df739f36-70f7-4dd4-a86b-4aa6e65a3465-kube-api-access-hnwnq\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519263 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-webhook-cert\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519313 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-plugins-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519386 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ae319d-3396-4567-8cbf-d9d331d01be4-config\") pod \"kube-controller-manager-operator-78b949d7b-4mccz\" (UID: \"f7ae319d-3396-4567-8cbf-d9d331d01be4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519493 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cb386f8-d968-4790-b003-48452b55487c-config\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519561 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4360d6c0-d5f1-49ae-917b-86560151e7ff-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvkxk\" (UID: \"4360d6c0-d5f1-49ae-917b-86560151e7ff\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519626 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a4852cac-3462-451a-b007-d9598c7acb67-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hv5vs\" (UID: \"a4852cac-3462-451a-b007-d9598c7acb67\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519678 4898 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-oauth-serving-cert\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519733 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1c2a3f52-4642-4c41-8dad-ac50db0c6763-metrics-tls\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: \"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:48 crc kubenswrapper[4898]: E0120 03:51:48.519757 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.019726098 +0000 UTC m=+155.619514157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519819 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn7mq\" (UniqueName: \"kubernetes.io/projected/1c2a3f52-4642-4c41-8dad-ac50db0c6763-kube-api-access-gn7mq\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: \"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519890 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cb386f8-d968-4790-b003-48452b55487c-etcd-service-ca\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519927 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5893dc-b521-419f-afc7-07dd1aaac395-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmbwr\" (UID: \"7d5893dc-b521-419f-afc7-07dd1aaac395\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519946 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh9kg\" (UniqueName: \"kubernetes.io/projected/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-kube-api-access-dh9kg\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519963 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c2a3f52-4642-4c41-8dad-ac50db0c6763-trusted-ca\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: 
\"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.519984 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a54c7052-c047-4ef5-a201-796f444ad467-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kskjk\" (UID: \"a54c7052-c047-4ef5-a201-796f444ad467\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520020 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5893dc-b521-419f-afc7-07dd1aaac395-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmbwr\" (UID: \"7d5893dc-b521-419f-afc7-07dd1aaac395\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520047 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dd9a227b-a085-42ce-b4b7-05fcfd678215-stats-auth\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520062 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-apiservice-cert\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520080 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be3f72ea-b769-4522-8cf3-f4e326329cf7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520099 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs4r6\" (UniqueName: \"kubernetes.io/projected/766f27f8-ddbf-4cf7-909a-424958a89fe2-kube-api-access-gs4r6\") pod \"machine-config-controller-84d6567774-b2j6r\" (UID: \"766f27f8-ddbf-4cf7-909a-424958a89fe2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520117 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pjqr\" (UniqueName: \"kubernetes.io/projected/a73f73a2-1335-45a7-867b-18585f1c0862-kube-api-access-6pjqr\") pod \"collect-profiles-29481345-v9njs\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520135 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-client-ca\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" 
Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520154 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9a227b-a085-42ce-b4b7-05fcfd678215-service-ca-bundle\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520170 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-serving-cert\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520191 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-key\") pod \"service-ca-9c57cc56f-vknkf\" (UID: \"cdb22f54-0343-40c5-94d9-9a743e7b875c\") " pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520206 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bsjcr\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520226 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zpcs\" (UniqueName: \"kubernetes.io/projected/4c15d357-55d5-4906-938d-4d47f3965b3b-kube-api-access-8zpcs\") pod \"olm-operator-6b444d44fb-4wb66\" (UID: \"4c15d357-55d5-4906-938d-4d47f3965b3b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520245 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/810f3b12-b157-4a65-becc-0490f489bcd9-profile-collector-cert\") pod \"catalog-operator-68c6474976-hp6hs\" (UID: \"810f3b12-b157-4a65-becc-0490f489bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520267 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-csi-data-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520284 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5nnv\" (UniqueName: \"kubernetes.io/projected/9bbaa55a-3008-4dc1-bc39-460904964ec3-kube-api-access-p5nnv\") pod \"package-server-manager-789f6589d5-zwrwk\" (UID: \"9bbaa55a-3008-4dc1-bc39-460904964ec3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520304 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/df739f36-70f7-4dd4-a86b-4aa6e65a3465-proxy-tls\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520319 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c15d357-55d5-4906-938d-4d47f3965b3b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4wb66\" (UID: \"4c15d357-55d5-4906-938d-4d47f3965b3b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520335 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a73f73a2-1335-45a7-867b-18585f1c0862-config-volume\") pod \"collect-profiles-29481345-v9njs\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520362 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/766f27f8-ddbf-4cf7-909a-424958a89fe2-proxy-tls\") pod \"machine-config-controller-84d6567774-b2j6r\" (UID: \"766f27f8-ddbf-4cf7-909a-424958a89fe2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520379 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvs8s\" (UniqueName: \"kubernetes.io/projected/c3ae44e5-109c-4893-968a-84304c3edcfb-kube-api-access-cvs8s\") pod \"service-ca-operator-777779d784-lh4fv\" (UID: \"c3ae44e5-109c-4893-968a-84304c3edcfb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520395 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-config\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520411 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-cabundle\") pod \"service-ca-9c57cc56f-vknkf\" (UID: \"cdb22f54-0343-40c5-94d9-9a743e7b875c\") " pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520442 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd9a227b-a085-42ce-b4b7-05fcfd678215-metrics-certs\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520462 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbbae7cb-6e5f-4122-9a1c-f6117ed70def-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-57gxc\" (UID: \"dbbae7cb-6e5f-4122-9a1c-f6117ed70def\") 
" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520477 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-trusted-ca-bundle\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520496 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a54c7052-c047-4ef5-a201-796f444ad467-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kskjk\" (UID: \"a54c7052-c047-4ef5-a201-796f444ad467\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520515 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw857\" (UniqueName: \"kubernetes.io/projected/1d90cae4-9acf-48f9-84ac-373717661814-kube-api-access-hw857\") pod \"machine-config-server-pq7b2\" (UID: \"1d90cae4-9acf-48f9-84ac-373717661814\") " pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520539 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8fzg\" (UniqueName: \"kubernetes.io/projected/480eb1b9-9ac2-4353-9216-751da9b33e4f-kube-api-access-t8fzg\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520559 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8j42\" (UniqueName: \"kubernetes.io/projected/ccd714e4-5975-4306-bf59-a1542a08367b-kube-api-access-q8j42\") pod \"downloads-7954f5f757-tbzbw\" (UID: \"ccd714e4-5975-4306-bf59-a1542a08367b\") " pod="openshift-console/downloads-7954f5f757-tbzbw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520577 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rglv\" (UniqueName: \"kubernetes.io/projected/810f3b12-b157-4a65-becc-0490f489bcd9-kube-api-access-7rglv\") pod \"catalog-operator-68c6474976-hp6hs\" (UID: \"810f3b12-b157-4a65-becc-0490f489bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520595 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s56mg\" (UniqueName: \"kubernetes.io/projected/7b2b0787-24b7-42e6-b0a6-86eaa18560a8-kube-api-access-s56mg\") pod \"migrator-59844c95c7-8sdnz\" (UID: \"7b2b0787-24b7-42e6-b0a6-86eaa18560a8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520616 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c2a3f52-4642-4c41-8dad-ac50db0c6763-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: \"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520639 4898 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j4vnl\" (UniqueName: \"kubernetes.io/projected/a71ba0b6-92d4-4756-b286-f93ce475a236-kube-api-access-j4vnl\") pod \"kube-storage-version-migrator-operator-b67b599dd-qbgkv\" (UID: \"a71ba0b6-92d4-4756-b286-f93ce475a236\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520659 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-certs\") pod \"machine-config-server-pq7b2\" (UID: \"1d90cae4-9acf-48f9-84ac-373717661814\") " pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.520680 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df739f36-70f7-4dd4-a86b-4aa6e65a3465-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.521350 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a54c7052-c047-4ef5-a201-796f444ad467-config\") pod \"kube-apiserver-operator-766d6c64bb-kskjk\" (UID: \"a54c7052-c047-4ef5-a201-796f444ad467\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.521409 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df739f36-70f7-4dd4-a86b-4aa6e65a3465-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.521455 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-registration-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.521713 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.521965 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-mountpoint-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.522040 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-socket-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.522581 4898 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbbae7cb-6e5f-4122-9a1c-f6117ed70def-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-57gxc\" (UID: \"dbbae7cb-6e5f-4122-9a1c-f6117ed70def\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.522659 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-service-ca\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.523878 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c2a3f52-4642-4c41-8dad-ac50db0c6763-trusted-ca\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: \"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.524008 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/766f27f8-ddbf-4cf7-909a-424958a89fe2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-b2j6r\" (UID: \"766f27f8-ddbf-4cf7-909a-424958a89fe2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.524461 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1c2a3f52-4642-4c41-8dad-ac50db0c6763-metrics-tls\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: \"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.524761 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5893dc-b521-419f-afc7-07dd1aaac395-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmbwr\" (UID: \"7d5893dc-b521-419f-afc7-07dd1aaac395\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.525283 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-csi-data-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.526132 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-tmpfs\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.526481 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/be3f72ea-b769-4522-8cf3-f4e326329cf7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.526967 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-plugins-dir\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.527675 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-trusted-ca-bundle\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.527840 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be3f72ea-b769-4522-8cf3-f4e326329cf7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.528696 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-oauth-serving-cert\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.529411 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbbae7cb-6e5f-4122-9a1c-f6117ed70def-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-57gxc\" (UID: \"dbbae7cb-6e5f-4122-9a1c-f6117ed70def\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.529811 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-serving-cert\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.530507 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a54c7052-c047-4ef5-a201-796f444ad467-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kskjk\" (UID: \"a54c7052-c047-4ef5-a201-796f444ad467\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.532822 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-oauth-config\") pod \"console-f9d7485db-ctlvr\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.540134 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 20 03:51:48 crc 
kubenswrapper[4898]: I0120 03:51:48.560163 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.566554 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5893dc-b521-419f-afc7-07dd1aaac395-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmbwr\" (UID: \"7d5893dc-b521-419f-afc7-07dd1aaac395\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.581415 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.599241 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.610727 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4360d6c0-d5f1-49ae-917b-86560151e7ff-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvkxk\" (UID: \"4360d6c0-d5f1-49ae-917b-86560151e7ff\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.621801 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.622745 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: E0120 03:51:48.623142 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.123124639 +0000 UTC m=+155.722912498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.640449 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.660661 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.668546 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ae319d-3396-4567-8cbf-d9d331d01be4-config\") pod \"kube-controller-manager-operator-78b949d7b-4mccz\" (UID: \"f7ae319d-3396-4567-8cbf-d9d331d01be4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.681504 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.701051 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.709401 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cb386f8-d968-4790-b003-48452b55487c-serving-cert\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.720160 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.724070 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:48 crc kubenswrapper[4898]: E0120 03:51:48.724208 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.224193438 +0000 UTC m=+155.823981297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.724518 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: E0120 03:51:48.724840 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.224828446 +0000 UTC m=+155.824616515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.740060 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.745248 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7cb386f8-d968-4790-b003-48452b55487c-etcd-client\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.761134 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.763975 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7cb386f8-d968-4790-b003-48452b55487c-etcd-ca\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.780621 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.788493 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cb386f8-d968-4790-b003-48452b55487c-config\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.801532 4898 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.820737 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.825582 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:48 crc kubenswrapper[4898]: E0120 03:51:48.825890 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.325859304 +0000 UTC m=+155.925647203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.827103 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: E0120 03:51:48.827537 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.327516816 +0000 UTC m=+155.927304705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.861025 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.873957 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7ae319d-3396-4567-8cbf-d9d331d01be4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4mccz\" (UID: \"f7ae319d-3396-4567-8cbf-d9d331d01be4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.880838 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.884491 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cb386f8-d968-4790-b003-48452b55487c-etcd-service-ca\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.900388 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.920841 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.928523 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:48 crc kubenswrapper[4898]: E0120 03:51:48.928698 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.428661527 +0000 UTC m=+156.028449426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.929722 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:48 crc kubenswrapper[4898]: E0120 03:51:48.930246 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.430223365 +0000 UTC m=+156.030011264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.933633 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dd9a227b-a085-42ce-b4b7-05fcfd678215-stats-auth\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.940959 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.949780 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd9a227b-a085-42ce-b4b7-05fcfd678215-metrics-certs\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.961166 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.980421 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 20 03:51:48 crc kubenswrapper[4898]: I0120 03:51:48.986277 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dd9a227b-a085-42ce-b4b7-05fcfd678215-default-certificate\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.000048 4898 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"service-ca-bundle" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.008698 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9a227b-a085-42ce-b4b7-05fcfd678215-service-ca-bundle\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.020568 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.032000 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.032172 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.532146421 +0000 UTC m=+156.131934310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.032806 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.033167 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.533149901 +0000 UTC m=+156.132937800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.040328 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.061183 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.080510 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.100777 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.109648 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-config\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.120386 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.133836 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.134205 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.634183129 +0000 UTC m=+156.233971018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.134849 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.135505 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.635477299 +0000 UTC m=+156.235265188 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.141588 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.155347 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00fedf08-d9d4-43f5-96ff-3f705c050a96-serving-cert\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.160972 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.169349 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-client-ca\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.181189 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.201030 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.220224 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 
03:51:49.232712 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/766f27f8-ddbf-4cf7-909a-424958a89fe2-proxy-tls\") pod \"machine-config-controller-84d6567774-b2j6r\" (UID: \"766f27f8-ddbf-4cf7-909a-424958a89fe2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.235809 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.236195 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.736119895 +0000 UTC m=+156.335907784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.237227 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.237887 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.737857988 +0000 UTC m=+156.337646067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.240998 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.265345 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.272417 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a4852cac-3462-451a-b007-d9598c7acb67-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hv5vs\" (UID: \"a4852cac-3462-451a-b007-d9598c7acb67\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.278643 4898 request.go:700] Waited for 1.014388144s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0 Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.280292 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.300336 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.312833 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-webhook-cert\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.312961 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-apiservice-cert\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.320481 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.339811 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.340040 4898 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.840006261 +0000 UTC m=+156.439794170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.340340 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.341018 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.341496 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.841472265 +0000 UTC m=+156.441260244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.362471 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.381028 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.401769 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.413152 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a71ba0b6-92d4-4756-b286-f93ce475a236-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qbgkv\" (UID: \"a71ba0b6-92d4-4756-b286-f93ce475a236\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.420708 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 20 03:51:49 crc 
kubenswrapper[4898]: I0120 03:51:49.440502 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.443266 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.443558 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.943511045 +0000 UTC m=+156.543298934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.444189 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.444788 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:49.944769913 +0000 UTC m=+156.544557802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.450351 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71ba0b6-92d4-4756-b286-f93ce475a236-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qbgkv\" (UID: \"a71ba0b6-92d4-4756-b286-f93ce475a236\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.460556 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.481087 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.490226 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/810f3b12-b157-4a65-becc-0490f489bcd9-profile-collector-cert\") pod \"catalog-operator-68c6474976-hp6hs\" (UID: \"810f3b12-b157-4a65-becc-0490f489bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.494128 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a73f73a2-1335-45a7-867b-18585f1c0862-secret-volume\") pod \"collect-profiles-29481345-v9njs\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.495843 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c15d357-55d5-4906-938d-4d47f3965b3b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4wb66\" (UID: \"4c15d357-55d5-4906-938d-4d47f3965b3b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.501589 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.515260 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c15d357-55d5-4906-938d-4d47f3965b3b-srv-cert\") pod \"olm-operator-6b444d44fb-4wb66\" (UID: \"4c15d357-55d5-4906-938d-4d47f3965b3b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.518953 4898 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.519079 4898 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-node-bootstrap-token podName:1d90cae4-9acf-48f9-84ac-373717661814 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.019045608 +0000 UTC m=+156.618833697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-node-bootstrap-token") pod "machine-config-server-pq7b2" (UID: "1d90cae4-9acf-48f9-84ac-373717661814") : failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.519215 4898 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.519336 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bbaa55a-3008-4dc1-bc39-460904964ec3-package-server-manager-serving-cert podName:9bbaa55a-3008-4dc1-bc39-460904964ec3 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.019306426 +0000 UTC m=+156.619094315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/9bbaa55a-3008-4dc1-bc39-460904964ec3-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-zwrwk" (UID: "9bbaa55a-3008-4dc1-bc39-460904964ec3") : failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.520128 4898 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.520253 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/810f3b12-b157-4a65-becc-0490f489bcd9-srv-cert podName:810f3b12-b157-4a65-becc-0490f489bcd9 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.020222154 +0000 UTC m=+156.620010043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/810f3b12-b157-4a65-becc-0490f489bcd9-srv-cert") pod "catalog-operator-68c6474976-hp6hs" (UID: "810f3b12-b157-4a65-becc-0490f489bcd9") : failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.521575 4898 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.521636 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.521654 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df739f36-70f7-4dd4-a86b-4aa6e65a3465-images podName:df739f36-70f7-4dd4-a86b-4aa6e65a3465 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.021636748 +0000 UTC m=+156.621424637 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/df739f36-70f7-4dd4-a86b-4aa6e65a3465-images") pod "machine-config-operator-74547568cd-gwsf2" (UID: "df739f36-70f7-4dd4-a86b-4aa6e65a3465") : failed to sync configmap cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.522198 4898 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.522231 4898 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.522284 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-key podName:cdb22f54-0343-40c5-94d9-9a743e7b875c nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.022263037 +0000 UTC m=+156.622050936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-key") pod "service-ca-9c57cc56f-vknkf" (UID: "cdb22f54-0343-40c5-94d9-9a743e7b875c") : failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.522340 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3ae44e5-109c-4893-968a-84304c3edcfb-config podName:c3ae44e5-109c-4893-968a-84304c3edcfb nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.022301918 +0000 UTC m=+156.622090047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c3ae44e5-109c-4893-968a-84304c3edcfb-config") pod "service-ca-operator-777779d784-lh4fv" (UID: "c3ae44e5-109c-4893-968a-84304c3edcfb") : failed to sync configmap cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.524512 4898 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.524594 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-trusted-ca podName:8a0b7e05-ef31-426e-989f-a6ad6c710150 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.024574638 +0000 UTC m=+156.624362527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-trusted-ca") pod "marketplace-operator-79b997595-bsjcr" (UID: "8a0b7e05-ef31-426e-989f-a6ad6c710150") : failed to sync configmap cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.524655 4898 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.524715 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ae44e5-109c-4893-968a-84304c3edcfb-serving-cert podName:c3ae44e5-109c-4893-968a-84304c3edcfb nodeName:}" failed. 
No retries permitted until 2026-01-20 03:51:50.024696062 +0000 UTC m=+156.624484191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c3ae44e5-109c-4893-968a-84304c3edcfb-serving-cert") pod "service-ca-operator-777779d784-lh4fv" (UID: "c3ae44e5-109c-4893-968a-84304c3edcfb") : failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.525869 4898 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.525991 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-operator-metrics podName:8a0b7e05-ef31-426e-989f-a6ad6c710150 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.025969351 +0000 UTC m=+156.625757250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-operator-metrics") pod "marketplace-operator-79b997595-bsjcr" (UID: "8a0b7e05-ef31-426e-989f-a6ad6c710150") : failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.527866 4898 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.527896 4898 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.527950 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-cabundle podName:cdb22f54-0343-40c5-94d9-9a743e7b875c nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.027928052 +0000 UTC m=+156.627715951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-cabundle") pod "service-ca-9c57cc56f-vknkf" (UID: "cdb22f54-0343-40c5-94d9-9a743e7b875c") : failed to sync configmap cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.527975 4898 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.527980 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-certs podName:1d90cae4-9acf-48f9-84ac-373717661814 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.027964933 +0000 UTC m=+156.627752832 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-certs") pod "machine-config-server-pq7b2" (UID: "1d90cae4-9acf-48f9-84ac-373717661814") : failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.528034 4898 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.528053 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df739f36-70f7-4dd4-a86b-4aa6e65a3465-proxy-tls podName:df739f36-70f7-4dd4-a86b-4aa6e65a3465 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.028032755 +0000 UTC m=+156.627820644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/df739f36-70f7-4dd4-a86b-4aa6e65a3465-proxy-tls") pod "machine-config-operator-74547568cd-gwsf2" (UID: "df739f36-70f7-4dd4-a86b-4aa6e65a3465") : failed to sync secret cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.528170 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a73f73a2-1335-45a7-867b-18585f1c0862-config-volume podName:a73f73a2-1335-45a7-867b-18585f1c0862 nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.028137308 +0000 UTC m=+156.627925407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a73f73a2-1335-45a7-867b-18585f1c0862-config-volume") pod "collect-profiles-29481345-v9njs" (UID: "a73f73a2-1335-45a7-867b-18585f1c0862") : failed to sync configmap cache: timed out waiting for the condition Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.541058 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.546359 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.546631 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.046590895 +0000 UTC m=+156.646378794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.550101 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.550195 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.050169596 +0000 UTC m=+156.649957485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.560928 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.580798 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.600768 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.620869 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.640734 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.652695 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.652939 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.152895375 +0000 UTC m=+156.752683294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.653947 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.654701 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.154664849 +0000 UTC m=+156.754452758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.661260 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.681732 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.716519 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.720474 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.740501 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.755983 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.756230 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.256195873 +0000 UTC m=+156.855983772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.756681 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.757170 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.257152582 +0000 UTC m=+156.856940471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.760657 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.781753 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.801194 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.821291 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.841144 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.858056 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.858578 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.358543812 +0000 UTC m=+156.958331711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.858916 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.859674 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.359644265 +0000 UTC m=+156.959432274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.860501 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.882337 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.901229 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.920902 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.940811 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.961904 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.962181 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.462143637 +0000 UTC m=+157.061931546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.963078 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:49 crc kubenswrapper[4898]: E0120 03:51:49.963644 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.463624303 +0000 UTC m=+157.063412202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:49 crc kubenswrapper[4898]: I0120 03:51:49.989249 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5d94\" (UniqueName: \"kubernetes.io/projected/12b38f53-df50-4c41-bb9a-c4922ce023b2-kube-api-access-n5d94\") pod \"console-operator-58897d9998-l9cpd\" (UID: \"12b38f53-df50-4c41-bb9a-c4922ce023b2\") " pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.008474 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzqz\" (UniqueName: \"kubernetes.io/projected/8c39e36d-f949-4ad3-ba7f-f4d2b9468a80-kube-api-access-6fzqz\") pod \"apiserver-7bbb656c7d-rk7bd\" (UID: \"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.020192 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk576\" (UniqueName: \"kubernetes.io/projected/3a9b8d7c-e836-4661-856d-5a0e8276387e-kube-api-access-jk576\") pod \"dns-operator-744455d44c-g8rbm\" (UID: \"3a9b8d7c-e836-4661-856d-5a0e8276387e\") " pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.047882 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fttgw\" (UniqueName: \"kubernetes.io/projected/620006cf-5c3f-457c-a416-30384cf951ec-kube-api-access-fttgw\") pod \"oauth-openshift-558db77b4-zkb77\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.053694 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.061078 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzqbt\" (UniqueName: \"kubernetes.io/projected/f5a9727e-9d16-4c1c-9279-ab4bb06fd41d-kube-api-access-bzqbt\") pod \"openshift-apiserver-operator-796bbdcf4f-vntvs\" (UID: \"f5a9727e-9d16-4c1c-9279-ab4bb06fd41d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.065138 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.065624 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/810f3b12-b157-4a65-becc-0490f489bcd9-srv-cert\") pod \"catalog-operator-68c6474976-hp6hs\" (UID: \"810f3b12-b157-4a65-becc-0490f489bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.065700 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbaa55a-3008-4dc1-bc39-460904964ec3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zwrwk\" (UID: \"9bbaa55a-3008-4dc1-bc39-460904964ec3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.065826 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.565790066 +0000 UTC m=+157.165577975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.066015 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.066125 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bsjcr\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.066414 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-key\") pod \"service-ca-9c57cc56f-vknkf\" (UID: \"cdb22f54-0343-40c5-94d9-9a743e7b875c\") " pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.067306 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bsjcr\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.067410 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df739f36-70f7-4dd4-a86b-4aa6e65a3465-proxy-tls\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.067546 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a73f73a2-1335-45a7-867b-18585f1c0862-config-volume\") pod \"collect-profiles-29481345-v9njs\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.067653 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-cabundle\") pod \"service-ca-9c57cc56f-vknkf\" (UID: \"cdb22f54-0343-40c5-94d9-9a743e7b875c\") " pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.067812 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-certs\") pod \"machine-config-server-pq7b2\" (UID: \"1d90cae4-9acf-48f9-84ac-373717661814\") " pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.067859 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/df739f36-70f7-4dd4-a86b-4aa6e65a3465-images\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.067884 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.56785667 +0000 UTC m=+157.167644569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.067969 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3ae44e5-109c-4893-968a-84304c3edcfb-config\") pod \"service-ca-operator-777779d784-lh4fv\" (UID: \"c3ae44e5-109c-4893-968a-84304c3edcfb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.068055 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ae44e5-109c-4893-968a-84304c3edcfb-serving-cert\") pod \"service-ca-operator-777779d784-lh4fv\" (UID: \"c3ae44e5-109c-4893-968a-84304c3edcfb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.068162 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-node-bootstrap-token\") pod \"machine-config-server-pq7b2\" (UID: \"1d90cae4-9acf-48f9-84ac-373717661814\") " pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.069637 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/df739f36-70f7-4dd4-a86b-4aa6e65a3465-images\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.070376 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3ae44e5-109c-4893-968a-84304c3edcfb-config\") pod \"service-ca-operator-777779d784-lh4fv\" (UID: \"c3ae44e5-109c-4893-968a-84304c3edcfb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 
03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.072875 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/810f3b12-b157-4a65-becc-0490f489bcd9-srv-cert\") pod \"catalog-operator-68c6474976-hp6hs\" (UID: \"810f3b12-b157-4a65-becc-0490f489bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.073075 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-cabundle\") pod \"service-ca-9c57cc56f-vknkf\" (UID: \"cdb22f54-0343-40c5-94d9-9a743e7b875c\") " pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.073310 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a73f73a2-1335-45a7-867b-18585f1c0862-config-volume\") pod \"collect-profiles-29481345-v9njs\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.073853 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bsjcr\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.074914 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbaa55a-3008-4dc1-bc39-460904964ec3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zwrwk\" (UID: \"9bbaa55a-3008-4dc1-bc39-460904964ec3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.076104 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ae44e5-109c-4893-968a-84304c3edcfb-serving-cert\") pod \"service-ca-operator-777779d784-lh4fv\" (UID: \"c3ae44e5-109c-4893-968a-84304c3edcfb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.077148 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bsjcr\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.077503 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df739f36-70f7-4dd4-a86b-4aa6e65a3465-proxy-tls\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.081182 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/cdb22f54-0343-40c5-94d9-9a743e7b875c-signing-key\") pod \"service-ca-9c57cc56f-vknkf\" (UID: \"cdb22f54-0343-40c5-94d9-9a743e7b875c\") " pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.089113 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7qmn\" (UniqueName: \"kubernetes.io/projected/f35df32f-6245-445e-95c6-c419d45ab949-kube-api-access-w7qmn\") pod \"authentication-operator-69f744f599-hlwdq\" (UID: \"f35df32f-6245-445e-95c6-c419d45ab949\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.101735 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.114531 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhdgl\" (UniqueName: \"kubernetes.io/projected/fb731558-acf5-4738-b505-c7ab65dbc2cf-kube-api-access-rhdgl\") pod \"machine-approver-56656f9798-2f8zg\" (UID: \"fb731558-acf5-4738-b505-c7ab65dbc2cf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.121206 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpgx7\" (UniqueName: \"kubernetes.io/projected/ee0cebe3-90ce-4443-8c95-4ac23ed2b98c-kube-api-access-wpgx7\") pod \"machine-api-operator-5694c8668f-n8lsh\" (UID: \"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.144936 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpjbp\" (UniqueName: \"kubernetes.io/projected/4fa80055-6c27-434c-b6b3-166af5828101-kube-api-access-bpjbp\") pod \"controller-manager-879f6c89f-ff5xs\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.162602 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grsgl\" (UniqueName: \"kubernetes.io/projected/a703a3d0-8ee9-4319-b2e0-0e0292eb8d98-kube-api-access-grsgl\") pod \"cluster-samples-operator-665b6dd947-bn8jb\" (UID: \"a703a3d0-8ee9-4319-b2e0-0e0292eb8d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.169611 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.169753 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.669728073 +0000 UTC m=+157.269515942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.170073 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.170585 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.670571059 +0000 UTC m=+157.270358928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.179624 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62zbx\" (UniqueName: \"kubernetes.io/projected/395759b1-2c0e-4592-9b92-afb458e31327-kube-api-access-62zbx\") pod \"apiserver-76f77b778f-hd2t9\" (UID: \"395759b1-2c0e-4592-9b92-afb458e31327\") " pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.221923 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.230356 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.241257 4898 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.244738 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.254002 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvcn\" (UniqueName: \"kubernetes.io/projected/baaced9e-4d77-491b-8898-028c9925a5c2-kube-api-access-fhvcn\") pod \"openshift-config-operator-7777fb866f-m85tn\" (UID: \"baaced9e-4d77-491b-8898-028c9925a5c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.255467 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l9cpd" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.260668 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.267042 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-certs\") pod \"machine-config-server-pq7b2\" (UID: \"1d90cae4-9acf-48f9-84ac-373717661814\") " pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.271504 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.271728 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.77170531 +0000 UTC m=+157.371493179 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.271951 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.272566 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.772545166 +0000 UTC m=+157.372333035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.276953 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.279412 4898 request.go:700] Waited for 1.937656514s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.281803 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.302535 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.308602 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.312684 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1d90cae4-9acf-48f9-84ac-373717661814-node-bootstrap-token\") pod \"machine-config-server-pq7b2\" (UID: \"1d90cae4-9acf-48f9-84ac-373717661814\") " pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.321049 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.324824 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.341068 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.345725 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.360924 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.365979 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.372829 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.373407 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.873390359 +0000 UTC m=+157.473178228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.380297 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.383734 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.389714 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.395947 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.402376 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.422336 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.440784 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.475426 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.476057 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:50.976034016 +0000 UTC m=+157.575821875 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.482842 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59zpb\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-kube-api-access-59zpb\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.495910 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-bound-sa-token\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.546032 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hd2t9"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.551073 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmjgt\" (UniqueName: \"kubernetes.io/projected/dd9a227b-a085-42ce-b4b7-05fcfd678215-kube-api-access-tmjgt\") pod \"router-default-5444994796-bmzz9\" (UID: \"dd9a227b-a085-42ce-b4b7-05fcfd678215\") " pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.552191 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-g8rbm"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.556248 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.568078 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ff5xs"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.568767 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8xx8\" (UniqueName: \"kubernetes.io/projected/dbbae7cb-6e5f-4122-9a1c-f6117ed70def-kube-api-access-x8xx8\") pod \"openshift-controller-manager-operator-756b6f6bc6-57gxc\" (UID: \"dbbae7cb-6e5f-4122-9a1c-f6117ed70def\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.574929 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjwm7\" (UniqueName: \"kubernetes.io/projected/4360d6c0-d5f1-49ae-917b-86560151e7ff-kube-api-access-bjwm7\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvkxk\" (UID: \"4360d6c0-d5f1-49ae-917b-86560151e7ff\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.578102 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.578622 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:51.07860454 +0000 UTC m=+157.678392399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: W0120 03:51:50.584426 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c39e36d_f949_4ad3_ba7f_f4d2b9468a80.slice/crio-3a232ae2d941f4519cd358b73041311bea0c1f4cb9b7b307f3f9f4347c7c1531 WatchSource:0}: Error finding container 3a232ae2d941f4519cd358b73041311bea0c1f4cb9b7b307f3f9f4347c7c1531: Status 404 returned error can't find the container with id 3a232ae2d941f4519cd358b73041311bea0c1f4cb9b7b307f3f9f4347c7c1531 Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.598730 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7fcb\" (UniqueName: \"kubernetes.io/projected/8a0b7e05-ef31-426e-989f-a6ad6c710150-kube-api-access-v7fcb\") pod \"marketplace-operator-79b997595-bsjcr\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.609639 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.619077 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be3f72ea-b769-4522-8cf3-f4e326329cf7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:50 crc kubenswrapper[4898]: W0120 03:51:50.619172 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fa80055_6c27_434c_b6b3_166af5828101.slice/crio-2a3448d0863f91e8d2c8e2c20e8c5a8e0cd3e1cf34ab77de3635b91d07157d78 WatchSource:0}: Error finding container 2a3448d0863f91e8d2c8e2c20e8c5a8e0cd3e1cf34ab77de3635b91d07157d78: Status 404 returned error can't find the container with id 2a3448d0863f91e8d2c8e2c20e8c5a8e0cd3e1cf34ab77de3635b91d07157d78 Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.632845 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.636066 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" event={"ID":"4fa80055-6c27-434c-b6b3-166af5828101","Type":"ContainerStarted","Data":"2a3448d0863f91e8d2c8e2c20e8c5a8e0cd3e1cf34ab77de3635b91d07157d78"} Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.636827 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" event={"ID":"3a9b8d7c-e836-4661-856d-5a0e8276387e","Type":"ContainerStarted","Data":"6d60693da6adf94f622213c3beffaa1eb9159b058723905ca7b7c67bde9d2f34"} Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.637498 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" event={"ID":"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80","Type":"ContainerStarted","Data":"3a232ae2d941f4519cd358b73041311bea0c1f4cb9b7b307f3f9f4347c7c1531"} Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.638024 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7ae319d-3396-4567-8cbf-d9d331d01be4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4mccz\" (UID: \"f7ae319d-3396-4567-8cbf-d9d331d01be4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.638476 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" event={"ID":"fb731558-acf5-4738-b505-c7ab65dbc2cf","Type":"ContainerStarted","Data":"0d81955b3497ab2a1c5c35b70d20f62ccb175ec766c7e9cdd23747dd952e6158"} Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.639404 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" event={"ID":"395759b1-2c0e-4592-9b92-afb458e31327","Type":"ContainerStarted","Data":"9c306a1abb12efe39ae45dd9ed7eb79fe2223aa95b91e87f0316fd0c32aef836"} Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.661516 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9xft\" (UniqueName: \"kubernetes.io/projected/be3f72ea-b769-4522-8cf3-f4e326329cf7-kube-api-access-p9xft\") pod \"cluster-image-registry-operator-dc59b4c8b-w5czf\" (UID: \"be3f72ea-b769-4522-8cf3-f4e326329cf7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.674326 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmp7h\" (UniqueName: \"kubernetes.io/projected/00fedf08-d9d4-43f5-96ff-3f705c050a96-kube-api-access-fmp7h\") pod \"route-controller-manager-6576b87f9c-7jzrf\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.683130 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.683619 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:51.183602791 +0000 UTC m=+157.783390640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.701999 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5893dc-b521-419f-afc7-07dd1aaac395-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bmbwr\" (UID: \"7d5893dc-b521-419f-afc7-07dd1aaac395\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.713947 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.725685 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn7mq\" (UniqueName: \"kubernetes.io/projected/1c2a3f52-4642-4c41-8dad-ac50db0c6763-kube-api-access-gn7mq\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: \"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.735701 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh9kg\" (UniqueName: \"kubernetes.io/projected/101a38cf-ed10-4c3f-b9b6-fe33e34bbd21-kube-api-access-dh9kg\") pod \"packageserver-d55dfcdfc-vn6kr\" (UID: \"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.754570 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l9cpd"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.757541 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx725\" (UniqueName: \"kubernetes.io/projected/0e1ecdb2-e05e-42ad-8b1e-b1805600ab23-kube-api-access-dx725\") pod \"csi-hostpathplugin-4xpjs\" (UID: \"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23\") " pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.777912 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skd52\" (UniqueName: \"kubernetes.io/projected/a4852cac-3462-451a-b007-d9598c7acb67-kube-api-access-skd52\") pod \"multus-admission-controller-857f4d67dd-hv5vs\" (UID: \"a4852cac-3462-451a-b007-d9598c7acb67\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.784161 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.784410 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:51.284395611 +0000 UTC m=+157.884183470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.788758 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.793452 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.793878 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:51.293867502 +0000 UTC m=+157.893655351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.796863 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.803245 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46ccx\" (UniqueName: \"kubernetes.io/projected/cdb22f54-0343-40c5-94d9-9a743e7b875c-kube-api-access-46ccx\") pod \"service-ca-9c57cc56f-vknkf\" (UID: \"cdb22f54-0343-40c5-94d9-9a743e7b875c\") " pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.809763 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.818062 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.841965 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw857\" (UniqueName: \"kubernetes.io/projected/1d90cae4-9acf-48f9-84ac-373717661814-kube-api-access-hw857\") pod \"machine-config-server-pq7b2\" (UID: \"1d90cae4-9acf-48f9-84ac-373717661814\") " pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.842066 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zpcs\" (UniqueName: \"kubernetes.io/projected/4c15d357-55d5-4906-938d-4d47f3965b3b-kube-api-access-8zpcs\") pod \"olm-operator-6b444d44fb-4wb66\" (UID: \"4c15d357-55d5-4906-938d-4d47f3965b3b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.842459 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.846710 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.851873 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-n8lsh"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.858910 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.859929 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnwnq\" (UniqueName: \"kubernetes.io/projected/df739f36-70f7-4dd4-a86b-4aa6e65a3465-kube-api-access-hnwnq\") pod \"machine-config-operator-74547568cd-gwsf2\" (UID: \"df739f36-70f7-4dd4-a86b-4aa6e65a3465\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.864017 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.876783 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pjqr\" (UniqueName: \"kubernetes.io/projected/a73f73a2-1335-45a7-867b-18585f1c0862-kube-api-access-6pjqr\") pod \"collect-profiles-29481345-v9njs\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.880373 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bsjcr"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.889539 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.894208 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.894645 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:51.394630652 +0000 UTC m=+157.994418511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.894713 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.898335 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5nnv\" (UniqueName: \"kubernetes.io/projected/9bbaa55a-3008-4dc1-bc39-460904964ec3-kube-api-access-p5nnv\") pod \"package-server-manager-789f6589d5-zwrwk\" (UID: \"9bbaa55a-3008-4dc1-bc39-460904964ec3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.903373 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.905880 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m85tn"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.913074 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.914854 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zkb77"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.915487 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hlwdq"] Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.917154 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c2a3f52-4642-4c41-8dad-ac50db0c6763-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4qr7t\" (UID: \"1c2a3f52-4642-4c41-8dad-ac50db0c6763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.945033 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsvx5\" (UniqueName: \"kubernetes.io/projected/7cb386f8-d968-4790-b003-48452b55487c-kube-api-access-rsvx5\") pod \"etcd-operator-b45778765-kfxts\" (UID: \"7cb386f8-d968-4790-b003-48452b55487c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.957278 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.963150 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.964542 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4vnl\" (UniqueName: \"kubernetes.io/projected/a71ba0b6-92d4-4756-b286-f93ce475a236-kube-api-access-j4vnl\") pod \"kube-storage-version-migrator-operator-b67b599dd-qbgkv\" (UID: \"a71ba0b6-92d4-4756-b286-f93ce475a236\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.982887 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a54c7052-c047-4ef5-a201-796f444ad467-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kskjk\" (UID: \"a54c7052-c047-4ef5-a201-796f444ad467\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.986472 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.995585 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:50 crc kubenswrapper[4898]: I0120 03:51:50.995733 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pq7b2" Jan 20 03:51:50 crc kubenswrapper[4898]: E0120 03:51:50.995880 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:51.495866926 +0000 UTC m=+158.095654785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.002350 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs4r6\" (UniqueName: \"kubernetes.io/projected/766f27f8-ddbf-4cf7-909a-424958a89fe2-kube-api-access-gs4r6\") pod \"machine-config-controller-84d6567774-b2j6r\" (UID: \"766f27f8-ddbf-4cf7-909a-424958a89fe2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.012408 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc"] Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.023564 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rglv\" (UniqueName: \"kubernetes.io/projected/810f3b12-b157-4a65-becc-0490f489bcd9-kube-api-access-7rglv\") pod \"catalog-operator-68c6474976-hp6hs\" (UID: \"810f3b12-b157-4a65-becc-0490f489bcd9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.033360 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8j42\" (UniqueName: \"kubernetes.io/projected/ccd714e4-5975-4306-bf59-a1542a08367b-kube-api-access-q8j42\") pod \"downloads-7954f5f757-tbzbw\" (UID: \"ccd714e4-5975-4306-bf59-a1542a08367b\") " pod="openshift-console/downloads-7954f5f757-tbzbw" Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.039105 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf"] Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.057556 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8fzg\" (UniqueName: \"kubernetes.io/projected/480eb1b9-9ac2-4353-9216-751da9b33e4f-kube-api-access-t8fzg\") pod \"console-f9d7485db-ctlvr\" (UID: 
\"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.066049 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.070492 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tbzbw" Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.076040 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.077261 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvs8s\" (UniqueName: \"kubernetes.io/projected/c3ae44e5-109c-4893-968a-84304c3edcfb-kube-api-access-cvs8s\") pod \"service-ca-operator-777779d784-lh4fv\" (UID: \"c3ae44e5-109c-4893-968a-84304c3edcfb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.083589 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.087290 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz"] Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.096124 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:51 crc kubenswrapper[4898]: E0120 03:51:51.096379 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:51.596365567 +0000 UTC m=+158.196153426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.097904 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s56mg\" (UniqueName: \"kubernetes.io/projected/7b2b0787-24b7-42e6-b0a6-86eaa18560a8-kube-api-access-s56mg\") pod \"migrator-59844c95c7-8sdnz\" (UID: \"7b2b0787-24b7-42e6-b0a6-86eaa18560a8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz" Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.101731 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.134798 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.150294 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.152462 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr"]
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.172029 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.181769 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.200342 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/754daf08-c014-474b-9ec9-ca10b014b002-cert\") pod \"ingress-canary-z2lzs\" (UID: \"754daf08-c014-474b-9ec9-ca10b014b002\") " pod="openshift-ingress-canary/ingress-canary-z2lzs"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.202145 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-487gc\" (UniqueName: \"kubernetes.io/projected/ce8bbf7a-5333-4080-bd34-d2c183b87d5e-kube-api-access-487gc\") pod \"dns-default-2xvbf\" (UID: \"ce8bbf7a-5333-4080-bd34-d2c183b87d5e\") " pod="openshift-dns/dns-default-2xvbf"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.202285 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce8bbf7a-5333-4080-bd34-d2c183b87d5e-config-volume\") pod \"dns-default-2xvbf\" (UID: \"ce8bbf7a-5333-4080-bd34-d2c183b87d5e\") " pod="openshift-dns/dns-default-2xvbf"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.203278 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.203317 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjrz2\" (UniqueName: \"kubernetes.io/projected/754daf08-c014-474b-9ec9-ca10b014b002-kube-api-access-fjrz2\") pod \"ingress-canary-z2lzs\" (UID: \"754daf08-c014-474b-9ec9-ca10b014b002\") " pod="openshift-ingress-canary/ingress-canary-z2lzs"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.203356 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce8bbf7a-5333-4080-bd34-d2c183b87d5e-metrics-tls\") pod \"dns-default-2xvbf\" (UID: \"ce8bbf7a-5333-4080-bd34-d2c183b87d5e\") " pod="openshift-dns/dns-default-2xvbf"
Jan 20 03:51:51 crc kubenswrapper[4898]: E0120 03:51:51.212342 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:51.712316904 +0000 UTC m=+158.312104973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.215695 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.219176 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk"]
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.306683 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.307010 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/754daf08-c014-474b-9ec9-ca10b014b002-cert\") pod \"ingress-canary-z2lzs\" (UID: \"754daf08-c014-474b-9ec9-ca10b014b002\") " pod="openshift-ingress-canary/ingress-canary-z2lzs"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.307058 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-487gc\" (UniqueName: \"kubernetes.io/projected/ce8bbf7a-5333-4080-bd34-d2c183b87d5e-kube-api-access-487gc\") pod \"dns-default-2xvbf\" (UID: \"ce8bbf7a-5333-4080-bd34-d2c183b87d5e\") " pod="openshift-dns/dns-default-2xvbf"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.307099 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce8bbf7a-5333-4080-bd34-d2c183b87d5e-config-volume\") pod \"dns-default-2xvbf\" (UID: \"ce8bbf7a-5333-4080-bd34-d2c183b87d5e\") " pod="openshift-dns/dns-default-2xvbf"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.307170 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjrz2\" (UniqueName: \"kubernetes.io/projected/754daf08-c014-474b-9ec9-ca10b014b002-kube-api-access-fjrz2\") pod \"ingress-canary-z2lzs\" (UID: \"754daf08-c014-474b-9ec9-ca10b014b002\") " pod="openshift-ingress-canary/ingress-canary-z2lzs"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.307188 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce8bbf7a-5333-4080-bd34-d2c183b87d5e-metrics-tls\") pod \"dns-default-2xvbf\" (UID: \"ce8bbf7a-5333-4080-bd34-d2c183b87d5e\") " pod="openshift-dns/dns-default-2xvbf"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.308861 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce8bbf7a-5333-4080-bd34-d2c183b87d5e-config-volume\") pod \"dns-default-2xvbf\" (UID: \"ce8bbf7a-5333-4080-bd34-d2c183b87d5e\") " pod="openshift-dns/dns-default-2xvbf"
Jan 20 03:51:51 crc kubenswrapper[4898]: E0120 03:51:51.309460 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:51.809423531 +0000 UTC m=+158.409211420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.317025 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce8bbf7a-5333-4080-bd34-d2c183b87d5e-metrics-tls\") pod \"dns-default-2xvbf\" (UID: \"ce8bbf7a-5333-4080-bd34-d2c183b87d5e\") " pod="openshift-dns/dns-default-2xvbf"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.320947 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/754daf08-c014-474b-9ec9-ca10b014b002-cert\") pod \"ingress-canary-z2lzs\" (UID: \"754daf08-c014-474b-9ec9-ca10b014b002\") " pod="openshift-ingress-canary/ingress-canary-z2lzs"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.350147 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-487gc\" (UniqueName: \"kubernetes.io/projected/ce8bbf7a-5333-4080-bd34-d2c183b87d5e-kube-api-access-487gc\") pod \"dns-default-2xvbf\" (UID: \"ce8bbf7a-5333-4080-bd34-d2c183b87d5e\") " pod="openshift-dns/dns-default-2xvbf"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.357709 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjrz2\" (UniqueName: \"kubernetes.io/projected/754daf08-c014-474b-9ec9-ca10b014b002-kube-api-access-fjrz2\") pod \"ingress-canary-z2lzs\" (UID: \"754daf08-c014-474b-9ec9-ca10b014b002\") " pod="openshift-ingress-canary/ingress-canary-z2lzs"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.378798 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66"]
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.412543 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:51 crc kubenswrapper[4898]: E0120 03:51:51.412974 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:51.912961726 +0000 UTC m=+158.512749585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.520380 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:51 crc kubenswrapper[4898]: E0120 03:51:51.521384 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:52.021359791 +0000 UTC m=+158.621147660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.594122 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4xpjs"]
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.601225 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2xvbf"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.610985 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z2lzs"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.625524 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:51 crc kubenswrapper[4898]: E0120 03:51:51.625843 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:52.125831954 +0000 UTC m=+158.725619813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.647492 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk"]
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.665059 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" event={"ID":"fb731558-acf5-4738-b505-c7ab65dbc2cf","Type":"ContainerStarted","Data":"0909d046fa600a229cc90f42588c3e694a770697d3382dc436f35cbdb1adebf7"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.671144 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" event={"ID":"f7ae319d-3396-4567-8cbf-d9d331d01be4","Type":"ContainerStarted","Data":"34e6068784a181773b26131e2abfddd85e8f693ffd0e5fc72c580658b513a827"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.681582 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" event={"ID":"f5a9727e-9d16-4c1c-9279-ab4bb06fd41d","Type":"ContainerStarted","Data":"4fcc7146319d95c168e2ea2221a90fb996139aec8263e8cb833ec464e548d409"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.686059 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" event={"ID":"4fa80055-6c27-434c-b6b3-166af5828101","Type":"ContainerStarted","Data":"d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.686461 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.688282 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" event={"ID":"4c15d357-55d5-4906-938d-4d47f3965b3b","Type":"ContainerStarted","Data":"6c319f41fb782849bd1164722bde2f165f35782530bcf5c71ce7adbd72e30d09"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.689630 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l9cpd" event={"ID":"12b38f53-df50-4c41-bb9a-c4922ce023b2","Type":"ContainerStarted","Data":"ffd404a9d0d2c91c7024ac5b2dbad9eea0e851fc5271258dc690ccb5aa3af6d0"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.689653 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l9cpd" event={"ID":"12b38f53-df50-4c41-bb9a-c4922ce023b2","Type":"ContainerStarted","Data":"c136f095aeca2bafea912389a49aeb56594ee7680eddaa5f1ddcc74829b941ae"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.690046 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-l9cpd"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.703660 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hv5vs"]
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.705377 4898 generic.go:334] "Generic (PLEG): container finished" podID="395759b1-2c0e-4592-9b92-afb458e31327" containerID="0b43e183f99dfcc009507ef7ab884bd6370553b8149e25229079f8afc0a77b56" exitCode=0
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.705471 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" event={"ID":"395759b1-2c0e-4592-9b92-afb458e31327","Type":"ContainerDied","Data":"0b43e183f99dfcc009507ef7ab884bd6370553b8149e25229079f8afc0a77b56"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.711964 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" event={"ID":"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c","Type":"ContainerStarted","Data":"559c9c69b71a5a0e9dddfd0a083fea6c87f94e9772797cc4daf6da3c80682223"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.712000 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" event={"ID":"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c","Type":"ContainerStarted","Data":"be8c5bd17e2b87fa71cd72d705190c9ad10140478b035a41eb3b841f8bf45702"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.713767 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" event={"ID":"8a0b7e05-ef31-426e-989f-a6ad6c710150","Type":"ContainerStarted","Data":"1b4e93131c24ebce932ea872461966c226feb328a1c4fce8d964c76a9e177da8"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.715288 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bmzz9" event={"ID":"dd9a227b-a085-42ce-b4b7-05fcfd678215","Type":"ContainerStarted","Data":"342149b01c5462daa99109e9b74251b674cc716352bc2e6dca037a1476c2ee9f"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.719983 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" event={"ID":"7d5893dc-b521-419f-afc7-07dd1aaac395","Type":"ContainerStarted","Data":"4392a53ee8b4892938db14e43fa95b23745f58064a85753b17a22564b0e39a93"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.728565 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:51 crc kubenswrapper[4898]: E0120 03:51:51.729000 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:52.228985767 +0000 UTC m=+158.828773616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.762307 4898 generic.go:334] "Generic (PLEG): container finished" podID="8c39e36d-f949-4ad3-ba7f-f4d2b9468a80" containerID="287656a8848b3581aa3c8306acfd9f1aa8d1732e5b44db65c8c7c7b70902a659" exitCode=0
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.771490 4898 generic.go:334] "Generic (PLEG): container finished" podID="baaced9e-4d77-491b-8898-028c9925a5c2" containerID="82aa3a7ef1fde268ce2c4f5d94efdc07ea61418cf4765dd3fd0f15ba6727692f" exitCode=0
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.844949 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" event={"ID":"620006cf-5c3f-457c-a416-30384cf951ec","Type":"ContainerStarted","Data":"409e57cd9dab888e981e9956d53e61494e87187fb3ef025544cef63111338965"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845307 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" event={"ID":"3a9b8d7c-e836-4661-856d-5a0e8276387e","Type":"ContainerStarted","Data":"257e10bc4b62e6282dc5316e7cff895870c2f99d1a6c4369b4c30fccf556a87f"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845323 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf"]
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845337 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" event={"ID":"a703a3d0-8ee9-4319-b2e0-0e0292eb8d98","Type":"ContainerStarted","Data":"cee09d1451b3f2540762953e094b515f628d7a2f02e4360a850715bf1365e73c"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845348 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2"]
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845362 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr"]
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845373 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk" event={"ID":"4360d6c0-d5f1-49ae-917b-86560151e7ff","Type":"ContainerStarted","Data":"b5901127c826ccc4407db8a5d505bd9e94f378910741ac2e85221a6deb8d11a5"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845391 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pq7b2" event={"ID":"1d90cae4-9acf-48f9-84ac-373717661814","Type":"ContainerStarted","Data":"a82caf58cfcefdbae9e9a8b7221222dbbefd3344f314b6b0e1d2c9b3fda12941"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845401 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" event={"ID":"f35df32f-6245-445e-95c6-c419d45ab949","Type":"ContainerStarted","Data":"a683e9cc0d1f9080151e363e2de0f86e6f5861fdf3a2342a4cc939dc901328be"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845411 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" event={"ID":"be3f72ea-b769-4522-8cf3-f4e326329cf7","Type":"ContainerStarted","Data":"5d092de9685f660db16dd8e0359253980bc1da8c14ec31f64b2729404675e0da"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845420 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" event={"ID":"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23","Type":"ContainerStarted","Data":"1bea31f7ef3f08fcd079c62776c258ede2db7286165c91ad0a90aa182ee75320"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845459 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" event={"ID":"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80","Type":"ContainerDied","Data":"287656a8848b3581aa3c8306acfd9f1aa8d1732e5b44db65c8c7c7b70902a659"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845474 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" event={"ID":"baaced9e-4d77-491b-8898-028c9925a5c2","Type":"ContainerDied","Data":"82aa3a7ef1fde268ce2c4f5d94efdc07ea61418cf4765dd3fd0f15ba6727692f"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845493 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" event={"ID":"baaced9e-4d77-491b-8898-028c9925a5c2","Type":"ContainerStarted","Data":"f00072016e3c7955bcdafddc173e7ed7eafe00752d11b15b4bd264c075f61ac0"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.845501 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" event={"ID":"dbbae7cb-6e5f-4122-9a1c-f6117ed70def","Type":"ContainerStarted","Data":"45fa795ea5efbdb3646caab1a5e40bcdc99d5fa03358f5923cfea9df8cbcbe44"}
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.851319 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.851403 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:51 crc kubenswrapper[4898]: E0120 03:51:51.882917 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:52.382898931 +0000 UTC m=+158.982686790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.904460 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-l9cpd"
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.953124 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:51 crc kubenswrapper[4898]: E0120 03:51:51.954042 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:52.4540257 +0000 UTC m=+159.053813559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:51 crc kubenswrapper[4898]: I0120 03:51:51.988754 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.056735 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:52 crc kubenswrapper[4898]: E0120 03:51:52.057282 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:52.557269836 +0000 UTC m=+159.157057695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.157566 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:52 crc kubenswrapper[4898]: E0120 03:51:52.157949 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:52.657933952 +0000 UTC m=+159.257721801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.232623 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ctlvr"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.257663 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vknkf"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.259140 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:52 crc kubenswrapper[4898]: E0120 03:51:52.259529 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:52.759502487 +0000 UTC m=+159.359290346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:52 crc kubenswrapper[4898]: W0120 03:51:52.309284 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda73f73a2_1335_45a7_867b_18585f1c0862.slice/crio-7ab114c584c64f45456eb0847d3c9287438e3215fd8f96fbf943f2889970ad01 WatchSource:0}: Error finding container 7ab114c584c64f45456eb0847d3c9287438e3215fd8f96fbf943f2889970ad01: Status 404 returned error can't find the container with id 7ab114c584c64f45456eb0847d3c9287438e3215fd8f96fbf943f2889970ad01
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.363356 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:52 crc kubenswrapper[4898]: E0120 03:51:52.363697 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:52.863679441 +0000 UTC m=+159.463467300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:52 crc kubenswrapper[4898]: W0120 03:51:52.444923 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod480eb1b9_9ac2_4353_9216_751da9b33e4f.slice/crio-b47d6c41e5fd6f0ce0131919f584031a09f2b36459f9f89edaca7079e5ff839d WatchSource:0}: Error finding container b47d6c41e5fd6f0ce0131919f584031a09f2b36459f9f89edaca7079e5ff839d: Status 404 returned error can't find the container with id b47d6c41e5fd6f0ce0131919f584031a09f2b36459f9f89edaca7079e5ff839d
Jan 20 03:51:52 crc kubenswrapper[4898]: W0120 03:51:52.456174 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdb22f54_0343_40c5_94d9_9a743e7b875c.slice/crio-1306c8f887c0fa7a6278dbf0e396aa7e35cfc00d9182c3a29330ea941a15bbc7 WatchSource:0}: Error finding container 1306c8f887c0fa7a6278dbf0e396aa7e35cfc00d9182c3a29330ea941a15bbc7: Status 404 returned error can't find the container with id 1306c8f887c0fa7a6278dbf0e396aa7e35cfc00d9182c3a29330ea941a15bbc7
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.466038 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:52 crc kubenswrapper[4898]: E0120 03:51:52.466406 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:52.966392901 +0000 UTC m=+159.566180750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.567772 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:52 crc kubenswrapper[4898]: E0120 03:51:52.568027 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:53.068011657 +0000 UTC m=+159.667799516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.593291 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t"]
Jan 20 03:51:52 crc kubenswrapper[4898]: W0120 03:51:52.645416 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c2a3f52_4642_4c41_8dad_ac50db0c6763.slice/crio-a055413b11c3a144dfe149970f3a9f980eab93fa31a18b68e96b6f64fdead086 WatchSource:0}: Error finding container a055413b11c3a144dfe149970f3a9f980eab93fa31a18b68e96b6f64fdead086: Status 404 returned error can't find the container with id a055413b11c3a144dfe149970f3a9f980eab93fa31a18b68e96b6f64fdead086
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.682295 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:52 crc kubenswrapper[4898]: E0120 03:51:52.682608 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:53.182599102 +0000 UTC m=+159.782386961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.744077 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.760552 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.760705 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kfxts"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.782384 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2xvbf"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.783043 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:52 crc kubenswrapper[4898]: E0120 03:51:52.783345 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:53.28332925 +0000 UTC m=+159.883117109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.826613 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" event={"ID":"cdb22f54-0343-40c5-94d9-9a743e7b875c","Type":"ContainerStarted","Data":"1306c8f887c0fa7a6278dbf0e396aa7e35cfc00d9182c3a29330ea941a15bbc7"}
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.864540 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.864728 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" podStartSLOduration=135.864717654 podStartE2EDuration="2m15.864717654s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:52.863223677 +0000 UTC m=+159.463011536" watchObservedRunningTime="2026-01-20 03:51:52.864717654 +0000 UTC m=+159.464505513"
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.867655 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" event={"ID":"ee0cebe3-90ce-4443-8c95-4ac23ed2b98c","Type":"ContainerStarted","Data":"3366c662b632c1940433e2ffda4e193ce14d7722a64497840cb337c5af288946"}
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.884513 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:52 crc kubenswrapper[4898]: E0120 03:51:52.884845 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:53.384833563 +0000 UTC m=+159.984621412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.897823 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tbzbw"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.898398 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-l9cpd" podStartSLOduration=135.898373699 podStartE2EDuration="2m15.898373699s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:52.896301335 +0000 UTC m=+159.496089194" watchObservedRunningTime="2026-01-20 03:51:52.898373699 +0000 UTC m=+159.498161558"
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.920636 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.925389 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z2lzs"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.934447 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.958168 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv"]
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.962002 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" event={"ID":"8a0b7e05-ef31-426e-989f-a6ad6c710150","Type":"ContainerStarted","Data":"3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1"}
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.964115 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr"
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.976065 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-n8lsh" podStartSLOduration=135.976035458 podStartE2EDuration="2m15.976035458s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:52.963855183 +0000 UTC m=+159.563643042" watchObservedRunningTime="2026-01-20 03:51:52.976035458 +0000 UTC m=+159.575823317"
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.978085 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" event={"ID":"a4852cac-3462-451a-b007-d9598c7acb67","Type":"ContainerStarted","Data":"d95a8832496421f1c52cd10d9312d672d9f0df53ba7044c9cb84d410fd158fab"}
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.986404 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:52 crc kubenswrapper[4898]: E0120 03:51:52.987756 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:53.487730268 +0000 UTC m=+160.087518127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:52 crc kubenswrapper[4898]: I0120 03:51:52.997597 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" podStartSLOduration=135.99757425 podStartE2EDuration="2m15.99757425s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:52.997179908 +0000 UTC m=+159.596967767" watchObservedRunningTime="2026-01-20 03:51:52.99757425 +0000 UTC m=+159.597362109"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.001941 4898 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bsjcr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.002003 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" podUID="8a0b7e05-ef31-426e-989f-a6ad6c710150" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.002625 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" event={"ID":"4c15d357-55d5-4906-938d-4d47f3965b3b","Type":"ContainerStarted","Data":"b6064df5002d06b82a413c820d916e1dcbef47dfbd33a5084c119459773b3828"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.003422 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.031046 4898 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4wb66 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.031107 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" podUID="4c15d357-55d5-4906-938d-4d47f3965b3b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.051959 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" podStartSLOduration=136.051939873 podStartE2EDuration="2m16.051939873s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.047504356 +0000 UTC m=+159.647292215" watchObservedRunningTime="2026-01-20 03:51:53.051939873 +0000 UTC m=+159.651727732"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.056977 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" event={"ID":"620006cf-5c3f-457c-a416-30384cf951ec","Type":"ContainerStarted","Data":"53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.057407 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.072408 4898 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-zkb77 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body=
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.075793 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" podUID="620006cf-5c3f-457c-a416-30384cf951ec" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.099802 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:53 crc kubenswrapper[4898]: E0120 03:51:53.102378 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:53.602365804 +0000 UTC m=+160.202153663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.138768 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" event={"ID":"a73f73a2-1335-45a7-867b-18585f1c0862","Type":"ContainerStarted","Data":"7ab114c584c64f45456eb0847d3c9287438e3215fd8f96fbf943f2889970ad01"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.150583 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" event={"ID":"f5a9727e-9d16-4c1c-9279-ab4bb06fd41d","Type":"ContainerStarted","Data":"97b3f91a3683ec16245fea4f16643c77a8f562970db20f96552ab19c56aa31d0"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.159857 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" event={"ID":"dbbae7cb-6e5f-4122-9a1c-f6117ed70def","Type":"ContainerStarted","Data":"2f6a061d6bfed498fcf54d82bbd3a03a6bdf033b4294826ba5400ed8d73506f3"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.183748 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" event={"ID":"f35df32f-6245-445e-95c6-c419d45ab949","Type":"ContainerStarted","Data":"99fe088f79325187a5e841c8d51c6ab7fed689e07b8bc49e6ac85559b1b426a2"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.212259 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" podStartSLOduration=136.212234124 podStartE2EDuration="2m16.212234124s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.211631395 +0000 UTC m=+159.811419244" watchObservedRunningTime="2026-01-20 03:51:53.212234124 +0000 UTC m=+159.812021983"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.212766 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.213599 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" podStartSLOduration=136.213593175 podStartE2EDuration="2m16.213593175s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.131122209 +0000 UTC m=+159.730910068" watchObservedRunningTime="2026-01-20 03:51:53.213593175 +0000 UTC m=+159.813381034"
Jan 20 03:51:53 crc kubenswrapper[4898]: E0120 03:51:53.226799 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:53.72675874 +0000 UTC m=+160.326546599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.250537 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-hlwdq" podStartSLOduration=136.250515171 podStartE2EDuration="2m16.250515171s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.24920059 +0000 UTC m=+159.848988449" watchObservedRunningTime="2026-01-20 03:51:53.250515171 +0000 UTC m=+159.850303030"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.305092 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bmzz9" event={"ID":"dd9a227b-a085-42ce-b4b7-05fcfd678215","Type":"ContainerStarted","Data":"34fbc3417fc26f3ae572e02d4cc3a5cf6d917262f1f44e22ee555b787b91ff50"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.317118 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:53 crc kubenswrapper[4898]: E0120 03:51:53.319301 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:53.819289496 +0000 UTC m=+160.419077355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.352847 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" event={"ID":"df739f36-70f7-4dd4-a86b-4aa6e65a3465","Type":"ContainerStarted","Data":"4e624e3c2f35b4b0016e0e3b074d6a8f6108d8190d7675e90ab1235846eeaefd"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.380520 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vntvs" podStartSLOduration=136.380501629 podStartE2EDuration="2m16.380501629s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.366123298 +0000 UTC m=+159.965911157" watchObservedRunningTime="2026-01-20 03:51:53.380501629 +0000 UTC m=+159.980289488"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.409268 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" event={"ID":"1c2a3f52-4642-4c41-8dad-ac50db0c6763","Type":"ContainerStarted","Data":"a055413b11c3a144dfe149970f3a9f980eab93fa31a18b68e96b6f64fdead086"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.430805 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:53 crc kubenswrapper[4898]: E0120 03:51:53.432307 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:53.932277942 +0000 UTC m=+160.532065801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.443800 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk" event={"ID":"4360d6c0-d5f1-49ae-917b-86560151e7ff","Type":"ContainerStarted","Data":"81ac13d87ad62a0d75700063e91cb9bb6f55aedbf4a3880dabb45f1f8d4073ed"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.471719 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" event={"ID":"be3f72ea-b769-4522-8cf3-f4e326329cf7","Type":"ContainerStarted","Data":"efa5c50565e269de6f0fa50105fb48e46b0c8ef39ec0ec75acd699cdef02ded2"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.475336 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" event={"ID":"9bbaa55a-3008-4dc1-bc39-460904964ec3","Type":"ContainerStarted","Data":"91f8dfd84000d3eb2c70f2f83001c353b0582fda2bcc2c6c66fb1c74b7342029"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.491911 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-57gxc" podStartSLOduration=136.491877816 podStartE2EDuration="2m16.491877816s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.429740034 +0000 UTC m=+160.029527893" watchObservedRunningTime="2026-01-20 03:51:53.491877816 +0000 UTC m=+160.091665675"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.493461 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" podStartSLOduration=136.493453274 podStartE2EDuration="2m16.493453274s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.491270747 +0000 UTC m=+160.091058606" watchObservedRunningTime="2026-01-20 03:51:53.493453274 +0000 UTC m=+160.093241133"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.508116 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" event={"ID":"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21","Type":"ContainerStarted","Data":"440c7689879e1a79f1049a0131a0f9155e1c1e2d2dbeb7da43a23ed1e371f8dc"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.509213 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.511130 4898 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vn6kr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body=
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.511217 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" podUID="101a38cf-ed10-4c3f-b9b6-fe33e34bbd21" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.534236 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:53 crc kubenswrapper[4898]: E0120 03:51:53.538850 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:54.03882989 +0000 UTC m=+160.638617749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.556572 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-bmzz9" podStartSLOduration=136.556554995 podStartE2EDuration="2m16.556554995s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.555968187 +0000 UTC m=+160.155756046" watchObservedRunningTime="2026-01-20 03:51:53.556554995 +0000 UTC m=+160.156342854"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.609148 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" event={"ID":"fb731558-acf5-4738-b505-c7ab65dbc2cf","Type":"ContainerStarted","Data":"def8b633553dea36e5b3ba8e700fd6ed45372a2123e592fb8722d9c1cef0f6f1"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.635646 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 03:51:53 crc kubenswrapper[4898]: E0120 03:51:53.636478 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:54.136453883 +0000 UTC m=+160.736241742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.677521 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.685533 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" event={"ID":"00fedf08-d9d4-43f5-96ff-3f705c050a96","Type":"ContainerStarted","Data":"181ed1718dd13908fdb254b17fcae1772dfabdc04826fb1139f75a1747d98f46"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.686630 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.700526 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" podStartSLOduration=136.700503423 podStartE2EDuration="2m16.700503423s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.657815529 +0000 UTC m=+160.257603388" watchObservedRunningTime="2026-01-20 03:51:53.700503423 +0000 UTC m=+160.300291282"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.700710 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvkxk" podStartSLOduration=136.700704679 podStartE2EDuration="2m16.700704679s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.699238614 +0000 UTC m=+160.299026473" watchObservedRunningTime="2026-01-20 03:51:53.700704679 +0000 UTC m=+160.300492538"
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.715201 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pq7b2" event={"ID":"1d90cae4-9acf-48f9-84ac-373717661814","Type":"ContainerStarted","Data":"902c9329239ee07d971d3a09ded3a52c8278c562230243cc7f2087bd8e16d6fa"}
Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.752618 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw"
Jan 20 03:51:53 crc kubenswrapper[4898]: E0120 03:51:53.753777 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-01-20 03:51:54.253765931 +0000 UTC m=+160.853553790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.755134 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w5czf" podStartSLOduration=136.755105463 podStartE2EDuration="2m16.755105463s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.752046809 +0000 UTC m=+160.351834668" watchObservedRunningTime="2026-01-20 03:51:53.755105463 +0000 UTC m=+160.354893322" Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.790586 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ctlvr" event={"ID":"480eb1b9-9ac2-4353-9216-751da9b33e4f","Type":"ContainerStarted","Data":"b47d6c41e5fd6f0ce0131919f584031a09f2b36459f9f89edaca7079e5ff839d"} Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.801194 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" event={"ID":"a703a3d0-8ee9-4319-b2e0-0e0292eb8d98","Type":"ContainerStarted","Data":"da90d01e4364c31e435b5defdb4b2765e894beebf01b57ae79178d8fe8a8db09"} Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.820828 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.853920 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:53 crc kubenswrapper[4898]: E0120 03:51:53.855198 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:54.355175231 +0000 UTC m=+160.954963090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.873573 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:51:53 crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:51:53 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:51:53 crc kubenswrapper[4898]: healthz check failed Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.874010 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.961562 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:53 crc kubenswrapper[4898]: E0120 03:51:53.963096 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:54.463070039 +0000 UTC m=+161.062857898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:53 crc kubenswrapper[4898]: I0120 03:51:53.987053 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" podStartSLOduration=136.986875792 podStartE2EDuration="2m16.986875792s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:53.985725106 +0000 UTC m=+160.585512985" watchObservedRunningTime="2026-01-20 03:51:53.986875792 +0000 UTC m=+160.586663651" Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.064842 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:54 crc kubenswrapper[4898]: E0120 03:51:54.065272 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:54.565253513 +0000 UTC m=+161.165041372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.167560 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:54 crc kubenswrapper[4898]: E0120 03:51:54.168010 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:54.667989143 +0000 UTC m=+161.267777002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.252573 4898 csr.go:261] certificate signing request csr-6hmpw is approved, waiting to be issued Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.255354 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.267679 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-pq7b2" podStartSLOduration=6.267645279 podStartE2EDuration="6.267645279s" podCreationTimestamp="2026-01-20 03:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:54.211150331 +0000 UTC m=+160.810938190" watchObservedRunningTime="2026-01-20 03:51:54.267645279 +0000 UTC m=+160.867433138" Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.270804 4898 csr.go:257] certificate signing request csr-6hmpw is issued Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.271228 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:54 crc kubenswrapper[4898]: E0120 03:51:54.271725 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:54.771709904 +0000 UTC m=+161.371497763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.370022 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.374115 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:54 crc kubenswrapper[4898]: E0120 03:51:54.374616 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:54.874596578 +0000 UTC m=+161.474384437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.439411 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" podStartSLOduration=137.439392002 podStartE2EDuration="2m17.439392002s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:54.437024199 +0000 UTC m=+161.036812058" watchObservedRunningTime="2026-01-20 03:51:54.439392002 +0000 UTC m=+161.039179851" Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.475513 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:54 crc kubenswrapper[4898]: E0120 03:51:54.476724 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:54.97670385 +0000 UTC m=+161.576491709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.579553 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:54 crc kubenswrapper[4898]: E0120 03:51:54.580238 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:55.080219784 +0000 UTC m=+161.680007633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.617177 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2f8zg" podStartSLOduration=137.6171501 podStartE2EDuration="2m17.6171501s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:54.575272302 +0000 UTC m=+161.175060151" watchObservedRunningTime="2026-01-20 03:51:54.6171501 +0000 UTC m=+161.216937959" Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.682987 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:54 crc kubenswrapper[4898]: E0120 03:51:54.683333 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:55.183298894 +0000 UTC m=+161.783086753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.774576 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ctlvr" podStartSLOduration=137.774559332 podStartE2EDuration="2m17.774559332s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:54.772325113 +0000 UTC m=+161.372112962" watchObservedRunningTime="2026-01-20 03:51:54.774559332 +0000 UTC m=+161.374347191" Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.785077 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:54 crc kubenswrapper[4898]: E0120 03:51:54.785539 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:55.285506399 +0000 UTC m=+161.885294258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.823988 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:51:54 crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:51:54 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:51:54 crc kubenswrapper[4898]: healthz check failed Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.824103 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.838221 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tbzbw" event={"ID":"ccd714e4-5975-4306-bf59-a1542a08367b","Type":"ContainerStarted","Data":"00378fec1adc14ea06b76adebfcccec7753e24ebe69a7090d1ec1e9b535ecaba"} Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.838267 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tbzbw" event={"ID":"ccd714e4-5975-4306-bf59-a1542a08367b","Type":"ContainerStarted","Data":"23cf628e2883d11a93b5c69e0da9809362a003a98ae1ebb46044c9e6d0ca6787"} Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.839001 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tbzbw" Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.845787 4898 patch_prober.go:28] interesting pod/downloads-7954f5f757-tbzbw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.845883 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tbzbw" podUID="ccd714e4-5975-4306-bf59-a1542a08367b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.848723 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" event={"ID":"101a38cf-ed10-4c3f-b9b6-fe33e34bbd21","Type":"ContainerStarted","Data":"5ebea62364ebe8221b24063c41292016f3689660799cb00fa08fb94514e3a0c7"} Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.886574 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 
20 03:51:54 crc kubenswrapper[4898]: E0120 03:51:54.887386 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:55.387372842 +0000 UTC m=+161.987160701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.892245 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" event={"ID":"766f27f8-ddbf-4cf7-909a-424958a89fe2","Type":"ContainerStarted","Data":"e41df4f04440de3d6652b8ff4faeb0b98546d399fe546656b82e7c2471670bce"} Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.892297 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" event={"ID":"766f27f8-ddbf-4cf7-909a-424958a89fe2","Type":"ContainerStarted","Data":"66950888e199866fb5ac6930b764a63d7d7312f5ca0aae291fcfcb5b2e04bd68"} Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.911501 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" event={"ID":"00fedf08-d9d4-43f5-96ff-3f705c050a96","Type":"ContainerStarted","Data":"e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30"} Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.925706 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z2lzs" event={"ID":"754daf08-c014-474b-9ec9-ca10b014b002","Type":"ContainerStarted","Data":"6f75041d1bea05b11b1b7ee5e87220f6bda0ebb16ecbb30e040865c294b2adce"} Jan 20 03:51:54 crc kubenswrapper[4898]: I0120 03:51:54.925773 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z2lzs" event={"ID":"754daf08-c014-474b-9ec9-ca10b014b002","Type":"ContainerStarted","Data":"8b4466e82fa096c4cfd6c7482f37dbb8c3dbcb2040fa0db56b8a6783a5837993"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.018656 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ctlvr" event={"ID":"480eb1b9-9ac2-4353-9216-751da9b33e4f","Type":"ContainerStarted","Data":"649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.034970 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:55 crc kubenswrapper[4898]: E0120 03:51:55.040303 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-20 03:51:55.540280396 +0000 UTC m=+162.140068255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.076139 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" event={"ID":"baaced9e-4d77-491b-8898-028c9925a5c2","Type":"ContainerStarted","Data":"bd36ffbf1d597ae9585330aecb303841474215159bf1e924cf97d110808559f2"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.099947 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" event={"ID":"a71ba0b6-92d4-4756-b286-f93ce475a236","Type":"ContainerStarted","Data":"5ffb3b1bbe53fd42a7b8fc3154ac12fed070150615cc5510a521abab95db0966"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.100015 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" event={"ID":"a71ba0b6-92d4-4756-b286-f93ce475a236","Type":"ContainerStarted","Data":"aaf5625bf72bb328b2f3b0dda2d093019817da6cabc6ed0d7277093ec37b19cf"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.112088 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vn6kr" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.118691 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" podStartSLOduration=138.118660517 podStartE2EDuration="2m18.118660517s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.115526691 +0000 UTC m=+161.715314540" watchObservedRunningTime="2026-01-20 03:51:55.118660517 +0000 UTC m=+161.718448376" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.118723 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" event={"ID":"9bbaa55a-3008-4dc1-bc39-460904964ec3","Type":"ContainerStarted","Data":"33e66a68f1cfd93fc67d0750ed44209d73625df21f4d036318d6a75edbdcb3f7"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.119444 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" event={"ID":"9bbaa55a-3008-4dc1-bc39-460904964ec3","Type":"ContainerStarted","Data":"e8abcaefb67ff8a9008a25e411958f3894fc9625e20fb114d4814b0c858419a6"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.119531 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.131020 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m85tn" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.136986 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:55 crc kubenswrapper[4898]: E0120 03:51:55.137671 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:55.637654871 +0000 UTC m=+162.237442730 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.148964 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-g8rbm" event={"ID":"3a9b8d7c-e836-4661-856d-5a0e8276387e","Type":"ContainerStarted","Data":"a5038bb0af1ba77431a36a1c57a4b7afde7fb369e8ed4e48896d243346f924eb"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.170220 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" event={"ID":"a73f73a2-1335-45a7-867b-18585f1c0862","Type":"ContainerStarted","Data":"3a1a758f5bb760edb0adfaeba23f50e231bc7b6838d3b7f375abb81d9918bc88"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.184733 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bmbwr" event={"ID":"7d5893dc-b521-419f-afc7-07dd1aaac395","Type":"ContainerStarted","Data":"31c35b21481d54924025510d7412831b5ebe9dd2948be2e7f76bef85fc64891e"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.209636 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" event={"ID":"810f3b12-b157-4a65-becc-0490f489bcd9","Type":"ContainerStarted","Data":"769c7993b21ebd1cc882f31046a8305df2a8610739308d5743a9063416d087c8"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.210337 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" event={"ID":"810f3b12-b157-4a65-becc-0490f489bcd9","Type":"ContainerStarted","Data":"1208a237ab4c141ba10d3a91b59c5901f5ec6fbbc36296cf1adf1e59297317e1"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.211403 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.230166 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tbzbw" podStartSLOduration=138.230136216 podStartE2EDuration="2m18.230136216s" 
podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.227852856 +0000 UTC m=+161.827640715" watchObservedRunningTime="2026-01-20 03:51:55.230136216 +0000 UTC m=+161.829924075" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.231844 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" event={"ID":"a4852cac-3462-451a-b007-d9598c7acb67","Type":"ContainerStarted","Data":"7199b37cfb7ef8daca6aee51d75af2896d79817b0d3ef936e32b748ab978f12e"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.239149 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:55 crc kubenswrapper[4898]: E0120 03:51:55.239902 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:55.739869915 +0000 UTC m=+162.339657774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.249837 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" event={"ID":"f7ae319d-3396-4567-8cbf-d9d331d01be4","Type":"ContainerStarted","Data":"94d7e3c067f9aec6eed15f74dace259d92b4c0f68ae7488ed5091f2a83027683"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.256158 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qbgkv" podStartSLOduration=138.256135175 podStartE2EDuration="2m18.256135175s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.253893856 +0000 UTC m=+161.853681715" watchObservedRunningTime="2026-01-20 03:51:55.256135175 +0000 UTC m=+161.855923034" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.265632 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.275832 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-20 03:46:54 +0000 UTC, rotation deadline is 2026-11-14 04:27:35.333276862 +0000 UTC Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.275865 4898 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7152h35m40.05741523s for 
next certificate rotation Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.281016 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" event={"ID":"df739f36-70f7-4dd4-a86b-4aa6e65a3465","Type":"ContainerStarted","Data":"422444fced77db444dfd6ecc2ce9ae74951b450d852693530375ae9fb4457e38"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.281060 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" event={"ID":"df739f36-70f7-4dd4-a86b-4aa6e65a3465","Type":"ContainerStarted","Data":"46d326ddc8ac16b390b6129dff07b4eff27c77e1c69162bb8fa3eff71e5ea7e9"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.301323 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-z2lzs" podStartSLOduration=7.301305465 podStartE2EDuration="7.301305465s" podCreationTimestamp="2026-01-20 03:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.300448529 +0000 UTC m=+161.900236388" watchObservedRunningTime="2026-01-20 03:51:55.301305465 +0000 UTC m=+161.901093324" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.317614 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" event={"ID":"cdb22f54-0343-40c5-94d9-9a743e7b875c","Type":"ContainerStarted","Data":"988088f44fe16b0e99a0d2cf5aa5e6276a493efbfedc8e54c4c5c902ad7e66fe"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.341314 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.341849 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" event={"ID":"8c39e36d-f949-4ad3-ba7f-f4d2b9468a80","Type":"ContainerStarted","Data":"ab4fd3713ad6e11869529c3a34ec523724e6d086a2689aa11071459b8776c44e"} Jan 20 03:51:55 crc kubenswrapper[4898]: E0120 03:51:55.342214 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:55.842192163 +0000 UTC m=+162.441980022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.389777 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz" event={"ID":"7b2b0787-24b7-42e6-b0a6-86eaa18560a8","Type":"ContainerStarted","Data":"dfee51dadd0de2b8419561e2ee352065bfb47f750e66b52f91145ad193597f08"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.389830 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz" event={"ID":"7b2b0787-24b7-42e6-b0a6-86eaa18560a8","Type":"ContainerStarted","Data":"967b0faf4bfb736c83258492d578e250ea2c4ca00d8626db2d92570472cd8941"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.389841 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz" event={"ID":"7b2b0787-24b7-42e6-b0a6-86eaa18560a8","Type":"ContainerStarted","Data":"dc8f067fdffcdc78b469e77849360eb4523b572d9ce13fd6bf41050077d056b9"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.393420 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" podStartSLOduration=138.393406768 podStartE2EDuration="2m18.393406768s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.392844111 +0000 UTC m=+161.992631970" watchObservedRunningTime="2026-01-20 03:51:55.393406768 +0000 UTC m=+161.993194617" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.410354 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" event={"ID":"a54c7052-c047-4ef5-a201-796f444ad467","Type":"ContainerStarted","Data":"80d7e69ff9a894539bc12aa85375d1ad7e83e22f2289ee5fb0ffc04cd411e508"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.410439 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" event={"ID":"a54c7052-c047-4ef5-a201-796f444ad467","Type":"ContainerStarted","Data":"db51ea2589c3d24077f1f8450cc11a7ac5b131df08cbc0dfb19462f6927309b0"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.425620 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bn8jb" event={"ID":"a703a3d0-8ee9-4319-b2e0-0e0292eb8d98","Type":"ContainerStarted","Data":"e6a743b4189d146e4666a16047b8693867d173c0972999cec36d9107a86d7008"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.445794 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:55 crc kubenswrapper[4898]: E0120 03:51:55.447835 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:55.947815522 +0000 UTC m=+162.547603371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.447819 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" event={"ID":"1c2a3f52-4642-4c41-8dad-ac50db0c6763","Type":"ContainerStarted","Data":"dc183c0d8179e1a512a8f05ce44979e5a9aca683d66722e4717b5e3ba57a86d3"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.447922 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" event={"ID":"1c2a3f52-4642-4c41-8dad-ac50db0c6763","Type":"ContainerStarted","Data":"f052a77f8fbc72f08a626c77fa5230d2bad0ffbbd10be55023cd9ac1682db942"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.476037 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" event={"ID":"395759b1-2c0e-4592-9b92-afb458e31327","Type":"ContainerStarted","Data":"6c2c1f8c3619bed9fd296632e0344ed67e1fccf9e40d845e0597b4d976ca81ed"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.476115 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" event={"ID":"395759b1-2c0e-4592-9b92-afb458e31327","Type":"ContainerStarted","Data":"cf3d6094e70a9dc363d7673a094ffc469eab16868ec60d5d9c392bc82bf5d611"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.478720 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" podStartSLOduration=138.478706192 podStartE2EDuration="2m18.478706192s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.441170177 +0000 UTC m=+162.040958026" watchObservedRunningTime="2026-01-20 03:51:55.478706192 +0000 UTC m=+162.078494051" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.478895 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" podStartSLOduration=138.478891778 podStartE2EDuration="2m18.478891778s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.470701216 +0000 UTC m=+162.070489085" watchObservedRunningTime="2026-01-20 03:51:55.478891778 +0000 UTC m=+162.078679637" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.509739 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" event={"ID":"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23","Type":"ContainerStarted","Data":"139eeb13adea1d654f9bf8de5b69b638187e4332f21b5965ef30709e5433cef2"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.522852 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" event={"ID":"7cb386f8-d968-4790-b003-48452b55487c","Type":"ContainerStarted","Data":"61670d47e193481b7a2f46fc650a8c133877a04f8630583848c9496457484a5b"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.522894 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" event={"ID":"7cb386f8-d968-4790-b003-48452b55487c","Type":"ContainerStarted","Data":"fbb7ede672dce9d2dc88c65ecc41b89deb00f64ab8677c71e78d5336edaa857e"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.530314 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" event={"ID":"c3ae44e5-109c-4893-968a-84304c3edcfb","Type":"ContainerStarted","Data":"ff863daa7ca7464b0395ebbcb16ff2cbb688b55d632641c02e358d31c2746f2f"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.530345 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" event={"ID":"c3ae44e5-109c-4893-968a-84304c3edcfb","Type":"ContainerStarted","Data":"1a02324db801e2910cf7c4ee51a1fd25c5db836b41f6db041df6837a2d4e044f"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.541169 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vknkf" podStartSLOduration=137.541154193 podStartE2EDuration="2m17.541154193s" podCreationTimestamp="2026-01-20 03:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.540784582 +0000 UTC m=+162.140572441" watchObservedRunningTime="2026-01-20 03:51:55.541154193 +0000 UTC m=+162.140942052" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.547953 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:55 crc kubenswrapper[4898]: E0120 03:51:55.549074 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:56.049059176 +0000 UTC m=+162.648847035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.552341 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2xvbf" event={"ID":"ce8bbf7a-5333-4080-bd34-d2c183b87d5e","Type":"ContainerStarted","Data":"25be4acc5aa2f710db116325eb487c5259642ab31ec74de78ec1dc4305e18657"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.552382 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2xvbf" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.552398 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2xvbf" event={"ID":"ce8bbf7a-5333-4080-bd34-d2c183b87d5e","Type":"ContainerStarted","Data":"8a4c48861880c690dcb7d6e7a51c6c01ce4e8e4d14b4d46a45f60145dc20b3b7"} Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.566914 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.567582 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.573689 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4wb66" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.617823 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" podStartSLOduration=138.617804501 podStartE2EDuration="2m18.617804501s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.589330235 +0000 UTC m=+162.189118094" watchObservedRunningTime="2026-01-20 03:51:55.617804501 +0000 UTC m=+162.217592360" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.618604 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gwsf2" podStartSLOduration=138.618598495 podStartE2EDuration="2m18.618598495s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.61614643 +0000 UTC m=+162.215934279" watchObservedRunningTime="2026-01-20 03:51:55.618598495 +0000 UTC m=+162.218386354" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.653716 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:55 crc 
kubenswrapper[4898]: E0120 03:51:55.658880 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:56.158862984 +0000 UTC m=+162.758650843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.692954 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4mccz" podStartSLOduration=138.692934541 podStartE2EDuration="2m18.692934541s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.651767045 +0000 UTC m=+162.251554904" watchObservedRunningTime="2026-01-20 03:51:55.692934541 +0000 UTC m=+162.292722400" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.729127 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hp6hs" podStartSLOduration=138.729101935 podStartE2EDuration="2m18.729101935s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.696116849 +0000 UTC m=+162.295904718" watchObservedRunningTime="2026-01-20 03:51:55.729101935 +0000 UTC m=+162.328889804" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.755490 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kfxts" podStartSLOduration=138.755469165 podStartE2EDuration="2m18.755469165s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.75365254 +0000 UTC m=+162.353440409" watchObservedRunningTime="2026-01-20 03:51:55.755469165 +0000 UTC m=+162.355257024" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.756240 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" podStartSLOduration=138.756235399 podStartE2EDuration="2m18.756235399s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.728407193 +0000 UTC m=+162.328195052" watchObservedRunningTime="2026-01-20 03:51:55.756235399 +0000 UTC m=+162.356023258" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.756751 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:55 crc kubenswrapper[4898]: E0120 03:51:55.756849 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:56.256823818 +0000 UTC m=+162.856611677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.766196 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:55 crc kubenswrapper[4898]: E0120 03:51:55.769592 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:56.269570849 +0000 UTC m=+162.869358708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.792732 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8sdnz" podStartSLOduration=138.792708081 podStartE2EDuration="2m18.792708081s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.785787928 +0000 UTC m=+162.385575787" watchObservedRunningTime="2026-01-20 03:51:55.792708081 +0000 UTC m=+162.392495930" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.843667 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:51:55 crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:51:55 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:51:55 crc kubenswrapper[4898]: healthz check failed Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.844505 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.861595 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2xvbf" podStartSLOduration=7.861570669 podStartE2EDuration="7.861570669s" podCreationTimestamp="2026-01-20 03:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.829007918 +0000 UTC m=+162.428795777" watchObservedRunningTime="2026-01-20 03:51:55.861570669 +0000 UTC m=+162.461358528" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.870101 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:55 crc kubenswrapper[4898]: E0120 03:51:55.871663 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:56.371640159 +0000 UTC m=+162.971428018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.922653 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kskjk" podStartSLOduration=138.922632527 podStartE2EDuration="2m18.922632527s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.864266392 +0000 UTC m=+162.464054271" watchObservedRunningTime="2026-01-20 03:51:55.922632527 +0000 UTC m=+162.522420386" Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.972411 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:55 crc kubenswrapper[4898]: E0120 03:51:55.972724 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:56.472712109 +0000 UTC m=+163.072499968 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:55 crc kubenswrapper[4898]: I0120 03:51:55.998329 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lh4fv" podStartSLOduration=137.998310955 podStartE2EDuration="2m17.998310955s" podCreationTimestamp="2026-01-20 03:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:55.997402038 +0000 UTC m=+162.597189897" watchObservedRunningTime="2026-01-20 03:51:55.998310955 +0000 UTC m=+162.598098814" Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.075986 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:56 crc kubenswrapper[4898]: E0120 03:51:56.076281 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:56.576266184 +0000 UTC m=+163.176054043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.089069 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qr7t" podStartSLOduration=139.089053527 podStartE2EDuration="2m19.089053527s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:56.087818728 +0000 UTC m=+162.687606587" watchObservedRunningTime="2026-01-20 03:51:56.089053527 +0000 UTC m=+162.688841386" Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.179240 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:56 crc kubenswrapper[4898]: E0120 03:51:56.179578 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:56.679566791 +0000 UTC m=+163.279354650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.280594 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:56 crc kubenswrapper[4898]: E0120 03:51:56.280969 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:56.780952029 +0000 UTC m=+163.380739888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.382378 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:56 crc kubenswrapper[4898]: E0120 03:51:56.382794 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:56.882749991 +0000 UTC m=+163.482537850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.465475 4898 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.483799 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:56 crc kubenswrapper[4898]: E0120 03:51:56.484298 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:56.984279044 +0000 UTC m=+163.584066903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.556865 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2xvbf" event={"ID":"ce8bbf7a-5333-4080-bd34-d2c183b87d5e","Type":"ContainerStarted","Data":"dacb0eef83280620403eb2ea9af58a2973992b4b9110c54686e0f90efb51b45c"} Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.558618 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" event={"ID":"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23","Type":"ContainerStarted","Data":"7eebd7e8892e0bc9e3c29d9d0d52954c91e7ab8e79ee49b3f02e6cb6c723ca08"} Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.560249 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hv5vs" event={"ID":"a4852cac-3462-451a-b007-d9598c7acb67","Type":"ContainerStarted","Data":"dd98f55c43651094ced857eab35bd3115e19476d902596907de844cfccd5087a"} Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.561655 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" event={"ID":"766f27f8-ddbf-4cf7-909a-424958a89fe2","Type":"ContainerStarted","Data":"06a4fc2964f4d0f56a89ed0d7ab2f5aa04c9a35ef85c40f67acac569c8ff51d1"} Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.562504 4898 patch_prober.go:28] interesting pod/downloads-7954f5f757-tbzbw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.562538 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tbzbw" podUID="ccd714e4-5975-4306-bf59-a1542a08367b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.584999 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:56 crc kubenswrapper[4898]: E0120 03:51:56.585415 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:57.085398615 +0000 UTC m=+163.685186474 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.589408 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b2j6r" podStartSLOduration=139.589375087 podStartE2EDuration="2m19.589375087s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:56.587648954 +0000 UTC m=+163.187436813" watchObservedRunningTime="2026-01-20 03:51:56.589375087 +0000 UTC m=+163.189162946" Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.686940 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:56 crc kubenswrapper[4898]: E0120 03:51:56.687152 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:57.187118543 +0000 UTC m=+163.786906402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.691100 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:56 crc kubenswrapper[4898]: E0120 03:51:56.694470 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:57.194444689 +0000 UTC m=+163.794232548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.794795 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:56 crc kubenswrapper[4898]: E0120 03:51:56.795478 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 03:51:57.295462147 +0000 UTC m=+163.895250006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.835043 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:51:56 crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:51:56 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:51:56 crc kubenswrapper[4898]: healthz check failed Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.835126 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.896513 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:56 crc kubenswrapper[4898]: E0120 03:51:56.896955 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 03:51:57.396939508 +0000 UTC m=+163.996727367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bpvpw" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.896941 4898 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-20T03:51:56.465506297Z","Handler":null,"Name":""} Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.899996 4898 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.900031 4898 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.940140 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k45j4"] Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.941735 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.943841 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.950313 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k45j4"] Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.997830 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.998320 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-utilities\") pod \"community-operators-k45j4\" (UID: \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.998404 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-catalog-content\") pod \"community-operators-k45j4\" (UID: \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:51:56 crc kubenswrapper[4898]: I0120 03:51:56.998425 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rhf7\" (UniqueName: \"kubernetes.io/projected/1b568186-dd5f-4340-9b0b-f083bf37a1b5-kube-api-access-9rhf7\") pod \"community-operators-k45j4\" (UID: 
\"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.003654 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.099275 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-catalog-content\") pod \"community-operators-k45j4\" (UID: \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.099324 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rhf7\" (UniqueName: \"kubernetes.io/projected/1b568186-dd5f-4340-9b0b-f083bf37a1b5-kube-api-access-9rhf7\") pod \"community-operators-k45j4\" (UID: \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.099382 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.099449 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-utilities\") pod \"community-operators-k45j4\" (UID: \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.100067 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-catalog-content\") pod \"community-operators-k45j4\" (UID: \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.100115 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-utilities\") pod \"community-operators-k45j4\" (UID: \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.103108 4898 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.103149 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.123149 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rhf7\" (UniqueName: \"kubernetes.io/projected/1b568186-dd5f-4340-9b0b-f083bf37a1b5-kube-api-access-9rhf7\") pod \"community-operators-k45j4\" (UID: \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.143470 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bvccs"] Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.144381 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.146393 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.159369 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bvccs"] Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.244288 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bpvpw\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.254041 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.302148 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjb9d\" (UniqueName: \"kubernetes.io/projected/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-kube-api-access-kjb9d\") pod \"certified-operators-bvccs\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") " pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.302194 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-catalog-content\") pod \"certified-operators-bvccs\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") " pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.302220 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-utilities\") pod \"certified-operators-bvccs\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") " pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.335031 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-74j5p"] Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.335924 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.345222 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-74j5p"] Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.357176 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.403044 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjb9d\" (UniqueName: \"kubernetes.io/projected/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-kube-api-access-kjb9d\") pod \"certified-operators-bvccs\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") " pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.403096 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-catalog-content\") pod \"certified-operators-bvccs\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") " pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.403125 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8skv\" (UniqueName: \"kubernetes.io/projected/46ecbb04-c29f-45a2-88aa-1d58b4d44820-kube-api-access-t8skv\") pod \"community-operators-74j5p\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.403156 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-catalog-content\") pod \"community-operators-74j5p\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.403183 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-utilities\") pod \"certified-operators-bvccs\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") " pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.403208 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-utilities\") pod \"community-operators-74j5p\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.403828 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-utilities\") pod \"certified-operators-bvccs\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") " pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.403923 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-catalog-content\") pod \"certified-operators-bvccs\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") " pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.429861 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjb9d\" (UniqueName: 
\"kubernetes.io/projected/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-kube-api-access-kjb9d\") pod \"certified-operators-bvccs\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") " pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.456098 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.504459 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8skv\" (UniqueName: \"kubernetes.io/projected/46ecbb04-c29f-45a2-88aa-1d58b4d44820-kube-api-access-t8skv\") pod \"community-operators-74j5p\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.504499 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-catalog-content\") pod \"community-operators-74j5p\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.504521 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-utilities\") pod \"community-operators-74j5p\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.504938 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-utilities\") pod \"community-operators-74j5p\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.505767 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-catalog-content\") pod \"community-operators-74j5p\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.513313 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k45j4"] Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.532392 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8skv\" (UniqueName: \"kubernetes.io/projected/46ecbb04-c29f-45a2-88aa-1d58b4d44820-kube-api-access-t8skv\") pod \"community-operators-74j5p\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.537082 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ss85s"] Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.538014 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.550366 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ss85s"] Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.586519 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k45j4" event={"ID":"1b568186-dd5f-4340-9b0b-f083bf37a1b5","Type":"ContainerStarted","Data":"8349868cb7a6e53dd2a5b982ed37811e260f8eaabfc733d5c52b4a5dcedf824d"} Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.589378 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" event={"ID":"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23","Type":"ContainerStarted","Data":"24faac95b9d89d12dc339d5d96e775be595c485ed8d5e51eb4ed71790e2e6215"} Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.589406 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" event={"ID":"0e1ecdb2-e05e-42ad-8b1e-b1805600ab23","Type":"ContainerStarted","Data":"972c5af9a50e52ea2de32e62f4d27b1528733579fd4f5b3d6619816dba44f974"} Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.593097 4898 patch_prober.go:28] interesting pod/downloads-7954f5f757-tbzbw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.593133 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tbzbw" podUID="ccd714e4-5975-4306-bf59-a1542a08367b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.609316 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4xpjs" podStartSLOduration=9.609296151 podStartE2EDuration="9.609296151s" podCreationTimestamp="2026-01-20 03:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:57.60700012 +0000 UTC m=+164.206787979" watchObservedRunningTime="2026-01-20 03:51:57.609296151 +0000 UTC m=+164.209084010" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.655505 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.673245 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bpvpw"] Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.708107 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-utilities\") pod \"certified-operators-ss85s\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.708150 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv77s\" (UniqueName: \"kubernetes.io/projected/63bc00c4-5532-4daa-9e22-b7bc5424035d-kube-api-access-vv77s\") pod \"certified-operators-ss85s\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.708402 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-catalog-content\") pod \"certified-operators-ss85s\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.759115 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.759766 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bvccs"] Jan 20 03:51:57 crc kubenswrapper[4898]: W0120 03:51:57.784119 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eab3b38_2b5e_4ab8_8660_a45f19b1d329.slice/crio-908ae6b6a4976880f747a798cd3d1a368d8aa58f34a95402f8c3f8a975e7e77f WatchSource:0}: Error finding container 908ae6b6a4976880f747a798cd3d1a368d8aa58f34a95402f8c3f8a975e7e77f: Status 404 returned error can't find the container with id 908ae6b6a4976880f747a798cd3d1a368d8aa58f34a95402f8c3f8a975e7e77f Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.811624 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-utilities\") pod \"certified-operators-ss85s\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.811668 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv77s\" (UniqueName: \"kubernetes.io/projected/63bc00c4-5532-4daa-9e22-b7bc5424035d-kube-api-access-vv77s\") pod \"certified-operators-ss85s\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.811729 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-catalog-content\") pod 
\"certified-operators-ss85s\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.812136 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-catalog-content\") pod \"certified-operators-ss85s\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.812369 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-utilities\") pod \"certified-operators-ss85s\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.823266 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:51:57 crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:51:57 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:51:57 crc kubenswrapper[4898]: healthz check failed Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.823307 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.832121 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv77s\" (UniqueName: \"kubernetes.io/projected/63bc00c4-5532-4daa-9e22-b7bc5424035d-kube-api-access-vv77s\") pod \"certified-operators-ss85s\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.863371 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.920781 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.921636 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.925532 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.925851 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.941851 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 03:51:57 crc kubenswrapper[4898]: I0120 03:51:57.944767 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-74j5p"] Jan 20 03:51:57 crc kubenswrapper[4898]: W0120 03:51:57.974909 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46ecbb04_c29f_45a2_88aa_1d58b4d44820.slice/crio-e35082825b756e6cc9513cdffecf53f50cf697d3d24e03ea4a0062a715e5683a WatchSource:0}: Error finding container e35082825b756e6cc9513cdffecf53f50cf697d3d24e03ea4a0062a715e5683a: Status 404 returned error can't find the container with id e35082825b756e6cc9513cdffecf53f50cf697d3d24e03ea4a0062a715e5683a Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.022623 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"af44cc46-eb54-4cd4-8161-d1b522d7c41a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.022735 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"af44cc46-eb54-4cd4-8161-d1b522d7c41a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.113898 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ss85s"] Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.123618 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"af44cc46-eb54-4cd4-8161-d1b522d7c41a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.123728 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"af44cc46-eb54-4cd4-8161-d1b522d7c41a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.123770 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"af44cc46-eb54-4cd4-8161-d1b522d7c41a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 
20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.144971 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"af44cc46-eb54-4cd4-8161-d1b522d7c41a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 03:51:58 crc kubenswrapper[4898]: W0120 03:51:58.147020 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63bc00c4_5532_4daa_9e22_b7bc5424035d.slice/crio-37a0dc87d0f8f3577936f99bd50e1552132d29055201f0cd68f141fd54e25d9a WatchSource:0}: Error finding container 37a0dc87d0f8f3577936f99bd50e1552132d29055201f0cd68f141fd54e25d9a: Status 404 returned error can't find the container with id 37a0dc87d0f8f3577936f99bd50e1552132d29055201f0cd68f141fd54e25d9a Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.276662 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.470825 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 03:51:58 crc kubenswrapper[4898]: W0120 03:51:58.480900 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaf44cc46_eb54_4cd4_8161_d1b522d7c41a.slice/crio-29ecb343eada961f7f0e2299da8787e1c9e54ed6f50926b928d06b8679bbf50e WatchSource:0}: Error finding container 29ecb343eada961f7f0e2299da8787e1c9e54ed6f50926b928d06b8679bbf50e: Status 404 returned error can't find the container with id 29ecb343eada961f7f0e2299da8787e1c9e54ed6f50926b928d06b8679bbf50e Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.597523 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" event={"ID":"9941fb67-6521-471d-8034-3cb2f695ee40","Type":"ContainerStarted","Data":"ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b"} Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.598121 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" event={"ID":"9941fb67-6521-471d-8034-3cb2f695ee40","Type":"ContainerStarted","Data":"2a4087f6253cf405c4420dc7172c2bf67b180eb8eec8103a9f1dbee1f263b1e0"} Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.599746 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.602278 4898 generic.go:334] "Generic (PLEG): container finished" podID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerID="4f374f9072e325bd41ff945b7d2648739dacaf354de7f6f450473cb3c336c5be" exitCode=0 Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.602715 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvccs" event={"ID":"2eab3b38-2b5e-4ab8-8660-a45f19b1d329","Type":"ContainerDied","Data":"4f374f9072e325bd41ff945b7d2648739dacaf354de7f6f450473cb3c336c5be"} Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.602734 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvccs" 
event={"ID":"2eab3b38-2b5e-4ab8-8660-a45f19b1d329","Type":"ContainerStarted","Data":"908ae6b6a4976880f747a798cd3d1a368d8aa58f34a95402f8c3f8a975e7e77f"} Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.603575 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.604322 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"af44cc46-eb54-4cd4-8161-d1b522d7c41a","Type":"ContainerStarted","Data":"29ecb343eada961f7f0e2299da8787e1c9e54ed6f50926b928d06b8679bbf50e"} Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.605620 4898 generic.go:334] "Generic (PLEG): container finished" podID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" containerID="0ea006117be734c07bbcf9760854f32e5e216c07968c58c4f6d4fd0812264cef" exitCode=0 Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.605692 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74j5p" event={"ID":"46ecbb04-c29f-45a2-88aa-1d58b4d44820","Type":"ContainerDied","Data":"0ea006117be734c07bbcf9760854f32e5e216c07968c58c4f6d4fd0812264cef"} Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.605720 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74j5p" event={"ID":"46ecbb04-c29f-45a2-88aa-1d58b4d44820","Type":"ContainerStarted","Data":"e35082825b756e6cc9513cdffecf53f50cf697d3d24e03ea4a0062a715e5683a"} Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.607296 4898 generic.go:334] "Generic (PLEG): container finished" podID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" containerID="d813620cbb66daac6097afc9cb5ea706dd9c384fe58e599b6d7973d8b633a28a" exitCode=0 Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.607361 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k45j4" event={"ID":"1b568186-dd5f-4340-9b0b-f083bf37a1b5","Type":"ContainerDied","Data":"d813620cbb66daac6097afc9cb5ea706dd9c384fe58e599b6d7973d8b633a28a"} Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.608518 4898 generic.go:334] "Generic (PLEG): container finished" podID="63bc00c4-5532-4daa-9e22-b7bc5424035d" containerID="4a55e45012aa85bc3e66c8bcc3844fdc4a93dfecf419588d3f475174e84fd8a5" exitCode=0 Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.609224 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss85s" event={"ID":"63bc00c4-5532-4daa-9e22-b7bc5424035d","Type":"ContainerDied","Data":"4a55e45012aa85bc3e66c8bcc3844fdc4a93dfecf419588d3f475174e84fd8a5"} Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.609239 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss85s" event={"ID":"63bc00c4-5532-4daa-9e22-b7bc5424035d","Type":"ContainerStarted","Data":"37a0dc87d0f8f3577936f99bd50e1552132d29055201f0cd68f141fd54e25d9a"} Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.651300 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" podStartSLOduration=141.651282273 podStartE2EDuration="2m21.651282273s" podCreationTimestamp="2026-01-20 03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:58.629038759 +0000 UTC m=+165.228826648" 
watchObservedRunningTime="2026-01-20 03:51:58.651282273 +0000 UTC m=+165.251070132" Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.822156 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:51:58 crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:51:58 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:51:58 crc kubenswrapper[4898]: healthz check failed Jan 20 03:51:58 crc kubenswrapper[4898]: I0120 03:51:58.822211 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.145007 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qgrfj"] Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.146321 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.149927 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.162257 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgrfj"] Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.236749 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk7bt\" (UniqueName: \"kubernetes.io/projected/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-kube-api-access-sk7bt\") pod \"redhat-marketplace-qgrfj\" (UID: \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.236804 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-utilities\") pod \"redhat-marketplace-qgrfj\" (UID: \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.236843 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-catalog-content\") pod \"redhat-marketplace-qgrfj\" (UID: \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.338549 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-utilities\") pod \"redhat-marketplace-qgrfj\" (UID: \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.340127 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-utilities\") pod \"redhat-marketplace-qgrfj\" (UID: 
\"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.340176 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-catalog-content\") pod \"redhat-marketplace-qgrfj\" (UID: \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.340296 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk7bt\" (UniqueName: \"kubernetes.io/projected/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-kube-api-access-sk7bt\") pod \"redhat-marketplace-qgrfj\" (UID: \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.340850 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-catalog-content\") pod \"redhat-marketplace-qgrfj\" (UID: \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.366040 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk7bt\" (UniqueName: \"kubernetes.io/projected/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-kube-api-access-sk7bt\") pod \"redhat-marketplace-qgrfj\" (UID: \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.493378 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.539273 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wt5nd"] Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.540891 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.549461 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt5nd"] Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.618644 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"af44cc46-eb54-4cd4-8161-d1b522d7c41a","Type":"ContainerStarted","Data":"a6fb49f7182f19fed845a408b93d2cd3d1b74dc31c932027062f260eea463160"} Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.644463 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-utilities\") pod \"redhat-marketplace-wt5nd\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.644590 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvbj2\" (UniqueName: \"kubernetes.io/projected/36ab491e-141c-4810-8d67-e31de85498c9-kube-api-access-kvbj2\") pod \"redhat-marketplace-wt5nd\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.644625 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-catalog-content\") pod \"redhat-marketplace-wt5nd\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.746049 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-catalog-content\") pod \"redhat-marketplace-wt5nd\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.746195 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-utilities\") pod \"redhat-marketplace-wt5nd\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.746613 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-catalog-content\") pod \"redhat-marketplace-wt5nd\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.747326 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-utilities\") pod \"redhat-marketplace-wt5nd\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.747351 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvbj2\" (UniqueName: 
\"kubernetes.io/projected/36ab491e-141c-4810-8d67-e31de85498c9-kube-api-access-kvbj2\") pod \"redhat-marketplace-wt5nd\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.772104 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvbj2\" (UniqueName: \"kubernetes.io/projected/36ab491e-141c-4810-8d67-e31de85498c9-kube-api-access-kvbj2\") pod \"redhat-marketplace-wt5nd\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.823253 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:51:59 crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:51:59 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:51:59 crc kubenswrapper[4898]: healthz check failed Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.823851 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.873321 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.946554 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.946537686 podStartE2EDuration="2.946537686s" podCreationTimestamp="2026-01-20 03:51:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:51:59.64235489 +0000 UTC m=+166.242142759" watchObservedRunningTime="2026-01-20 03:51:59.946537686 +0000 UTC m=+166.546325545" Jan 20 03:51:59 crc kubenswrapper[4898]: I0120 03:51:59.949254 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgrfj"] Jan 20 03:51:59 crc kubenswrapper[4898]: W0120 03:51:59.976345 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70d524b5_855e_4dda_aaa8_5ae9463e7b3c.slice/crio-3f9391918fffd4d517d5d9a8cc29d2a79d398454ab110408f593ea080fe12bd2 WatchSource:0}: Error finding container 3f9391918fffd4d517d5d9a8cc29d2a79d398454ab110408f593ea080fe12bd2: Status 404 returned error can't find the container with id 3f9391918fffd4d517d5d9a8cc29d2a79d398454ab110408f593ea080fe12bd2 Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.054087 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.054142 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.063326 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:52:00 crc 
kubenswrapper[4898]: I0120 03:52:00.141552 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9m8f"] Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.152172 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.152930 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9m8f"] Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.155304 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.155322 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.161916 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e93f051c-f83c-4d27-a695-dd5a33e979f4-metrics-certs\") pod \"network-metrics-daemon-5hkf9\" (UID: \"e93f051c-f83c-4d27-a695-dd5a33e979f4\") " pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.256901 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p4k7\" (UniqueName: \"kubernetes.io/projected/f0c2a49d-68aa-428a-87dd-fc3cddb41040-kube-api-access-4p4k7\") pod \"redhat-operators-f9m8f\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") " pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.257330 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-catalog-content\") pod \"redhat-operators-f9m8f\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") " pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.257368 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-utilities\") pod \"redhat-operators-f9m8f\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") " pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.278164 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.278249 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.284717 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.330728 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt5nd"] Jan 20 03:52:00 crc kubenswrapper[4898]: W0120 03:52:00.342421 4898 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36ab491e_141c_4810_8d67_e31de85498c9.slice/crio-de6a2230c8e08158d9b7873764d8ec814fb407c6a035f00b83b8a7335a2ce07d WatchSource:0}: Error finding container de6a2230c8e08158d9b7873764d8ec814fb407c6a035f00b83b8a7335a2ce07d: Status 404 returned error can't find the container with id de6a2230c8e08158d9b7873764d8ec814fb407c6a035f00b83b8a7335a2ce07d Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.358682 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-catalog-content\") pod \"redhat-operators-f9m8f\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") " pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.358753 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-utilities\") pod \"redhat-operators-f9m8f\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") " pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.358842 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p4k7\" (UniqueName: \"kubernetes.io/projected/f0c2a49d-68aa-428a-87dd-fc3cddb41040-kube-api-access-4p4k7\") pod \"redhat-operators-f9m8f\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") " pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.360044 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-catalog-content\") pod \"redhat-operators-f9m8f\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") " pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.360996 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-utilities\") pod \"redhat-operators-f9m8f\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") " pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.377983 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p4k7\" (UniqueName: \"kubernetes.io/projected/f0c2a49d-68aa-428a-87dd-fc3cddb41040-kube-api-access-4p4k7\") pod \"redhat-operators-f9m8f\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") " pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.444481 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hkf9" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.536677 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.550896 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r4r2t"] Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.552488 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.562023 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r4r2t"] Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.630657 4898 generic.go:334] "Generic (PLEG): container finished" podID="af44cc46-eb54-4cd4-8161-d1b522d7c41a" containerID="a6fb49f7182f19fed845a408b93d2cd3d1b74dc31c932027062f260eea463160" exitCode=0 Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.630742 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"af44cc46-eb54-4cd4-8161-d1b522d7c41a","Type":"ContainerDied","Data":"a6fb49f7182f19fed845a408b93d2cd3d1b74dc31c932027062f260eea463160"} Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.641137 4898 generic.go:334] "Generic (PLEG): container finished" podID="36ab491e-141c-4810-8d67-e31de85498c9" containerID="52b124f73d48cd46c71d945bddfa17108207fdc5bab61c1d30235b0328780096" exitCode=0 Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.642063 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt5nd" event={"ID":"36ab491e-141c-4810-8d67-e31de85498c9","Type":"ContainerDied","Data":"52b124f73d48cd46c71d945bddfa17108207fdc5bab61c1d30235b0328780096"} Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.642087 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt5nd" event={"ID":"36ab491e-141c-4810-8d67-e31de85498c9","Type":"ContainerStarted","Data":"de6a2230c8e08158d9b7873764d8ec814fb407c6a035f00b83b8a7335a2ce07d"} Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.652568 4898 generic.go:334] "Generic (PLEG): container finished" podID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerID="8a2cb5d9e8809ccfa895be9f9a400d611da4814f620650de1408f90723a0ddbc" exitCode=0 Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.654370 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgrfj" event={"ID":"70d524b5-855e-4dda-aaa8-5ae9463e7b3c","Type":"ContainerDied","Data":"8a2cb5d9e8809ccfa895be9f9a400d611da4814f620650de1408f90723a0ddbc"} Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.654394 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgrfj" event={"ID":"70d524b5-855e-4dda-aaa8-5ae9463e7b3c","Type":"ContainerStarted","Data":"3f9391918fffd4d517d5d9a8cc29d2a79d398454ab110408f593ea080fe12bd2"} Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.660574 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hd2t9" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.662299 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-utilities\") pod \"redhat-operators-r4r2t\" (UID: \"e5d08231-f987-45cc-ac85-683f31f6e616\") " pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.662362 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-catalog-content\") pod \"redhat-operators-r4r2t\" (UID: 
\"e5d08231-f987-45cc-ac85-683f31f6e616\") " pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.662393 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khjrn\" (UniqueName: \"kubernetes.io/projected/e5d08231-f987-45cc-ac85-683f31f6e616-kube-api-access-khjrn\") pod \"redhat-operators-r4r2t\" (UID: \"e5d08231-f987-45cc-ac85-683f31f6e616\") " pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.669875 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rk7bd" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.765335 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-catalog-content\") pod \"redhat-operators-r4r2t\" (UID: \"e5d08231-f987-45cc-ac85-683f31f6e616\") " pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.765473 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khjrn\" (UniqueName: \"kubernetes.io/projected/e5d08231-f987-45cc-ac85-683f31f6e616-kube-api-access-khjrn\") pod \"redhat-operators-r4r2t\" (UID: \"e5d08231-f987-45cc-ac85-683f31f6e616\") " pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.765610 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-utilities\") pod \"redhat-operators-r4r2t\" (UID: \"e5d08231-f987-45cc-ac85-683f31f6e616\") " pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.766564 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-catalog-content\") pod \"redhat-operators-r4r2t\" (UID: \"e5d08231-f987-45cc-ac85-683f31f6e616\") " pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.768275 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-utilities\") pod \"redhat-operators-r4r2t\" (UID: \"e5d08231-f987-45cc-ac85-683f31f6e616\") " pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.802813 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khjrn\" (UniqueName: \"kubernetes.io/projected/e5d08231-f987-45cc-ac85-683f31f6e616-kube-api-access-khjrn\") pod \"redhat-operators-r4r2t\" (UID: \"e5d08231-f987-45cc-ac85-683f31f6e616\") " pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.824487 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.846971 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:52:00 
crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:52:00 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:52:00 crc kubenswrapper[4898]: healthz check failed Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.847056 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.912014 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5hkf9"] Jan 20 03:52:00 crc kubenswrapper[4898]: I0120 03:52:00.938865 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:52:00 crc kubenswrapper[4898]: W0120 03:52:00.960008 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode93f051c_f83c_4d27_a695_dd5a33e979f4.slice/crio-9f0109859125f876709ff5e66a23905ad7cd4f2e31d6658cae481cdad386c4b5 WatchSource:0}: Error finding container 9f0109859125f876709ff5e66a23905ad7cd4f2e31d6658cae481cdad386c4b5: Status 404 returned error can't find the container with id 9f0109859125f876709ff5e66a23905ad7cd4f2e31d6658cae481cdad386c4b5 Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.078825 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.079040 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.083360 4898 patch_prober.go:28] interesting pod/console-f9d7485db-ctlvr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.083440 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ctlvr" podUID="480eb1b9-9ac2-4353-9216-751da9b33e4f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.093078 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tbzbw" Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.245228 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9m8f"] Jan 20 03:52:01 crc kubenswrapper[4898]: W0120 03:52:01.270439 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0c2a49d_68aa_428a_87dd_fc3cddb41040.slice/crio-7ac2dcc206880ebdc70b118b1c60c05506b88b48170b09b972d2463f7fb2eb32 WatchSource:0}: Error finding container 7ac2dcc206880ebdc70b118b1c60c05506b88b48170b09b972d2463f7fb2eb32: Status 404 returned error can't find the container with id 7ac2dcc206880ebdc70b118b1c60c05506b88b48170b09b972d2463f7fb2eb32 Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.377496 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r4r2t"] Jan 20 03:52:01 crc 
kubenswrapper[4898]: I0120 03:52:01.688716 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4r2t" event={"ID":"e5d08231-f987-45cc-ac85-683f31f6e616","Type":"ContainerStarted","Data":"ca6bd4de4e6200251b74a40431e7332e5ffd6886dd0e06effadbba03f015648a"} Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.701252 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" event={"ID":"e93f051c-f83c-4d27-a695-dd5a33e979f4","Type":"ContainerStarted","Data":"e846b9f9cef51bb0072a54f22db8504cbd3e682e508c55823eb6a5cbce934aa2"} Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.701315 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" event={"ID":"e93f051c-f83c-4d27-a695-dd5a33e979f4","Type":"ContainerStarted","Data":"9f0109859125f876709ff5e66a23905ad7cd4f2e31d6658cae481cdad386c4b5"} Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.709516 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9m8f" event={"ID":"f0c2a49d-68aa-428a-87dd-fc3cddb41040","Type":"ContainerStarted","Data":"7ac2dcc206880ebdc70b118b1c60c05506b88b48170b09b972d2463f7fb2eb32"} Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.833571 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:52:01 crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:52:01 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:52:01 crc kubenswrapper[4898]: healthz check failed Jan 20 03:52:01 crc kubenswrapper[4898]: I0120 03:52:01.834032 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.052184 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.117776 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kubelet-dir\") pod \"af44cc46-eb54-4cd4-8161-d1b522d7c41a\" (UID: \"af44cc46-eb54-4cd4-8161-d1b522d7c41a\") " Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.117885 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kube-api-access\") pod \"af44cc46-eb54-4cd4-8161-d1b522d7c41a\" (UID: \"af44cc46-eb54-4cd4-8161-d1b522d7c41a\") " Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.118187 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af44cc46-eb54-4cd4-8161-d1b522d7c41a" (UID: "af44cc46-eb54-4cd4-8161-d1b522d7c41a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.118585 4898 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.131787 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af44cc46-eb54-4cd4-8161-d1b522d7c41a" (UID: "af44cc46-eb54-4cd4-8161-d1b522d7c41a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.219363 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af44cc46-eb54-4cd4-8161-d1b522d7c41a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.731611 4898 generic.go:334] "Generic (PLEG): container finished" podID="e5d08231-f987-45cc-ac85-683f31f6e616" containerID="b35dbf2679cfe1ef1f0b853a16fb9949bd62aeaae8c989809dda0ea56e6a25e0" exitCode=0 Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.731684 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4r2t" event={"ID":"e5d08231-f987-45cc-ac85-683f31f6e616","Type":"ContainerDied","Data":"b35dbf2679cfe1ef1f0b853a16fb9949bd62aeaae8c989809dda0ea56e6a25e0"} Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.747286 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.747367 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"af44cc46-eb54-4cd4-8161-d1b522d7c41a","Type":"ContainerDied","Data":"29ecb343eada961f7f0e2299da8787e1c9e54ed6f50926b928d06b8679bbf50e"} Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.747399 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29ecb343eada961f7f0e2299da8787e1c9e54ed6f50926b928d06b8679bbf50e" Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.749619 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hkf9" event={"ID":"e93f051c-f83c-4d27-a695-dd5a33e979f4","Type":"ContainerStarted","Data":"c459e5d8cebc69acff59a6554c9968e48bad43c92b72446bdca9b0a4677d4113"} Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.755696 4898 generic.go:334] "Generic (PLEG): container finished" podID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" containerID="881f1d78b417f196189fc804ba3e9d31dcf76fba4901b4a4010e815cbee05e8a" exitCode=0 Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.755738 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9m8f" event={"ID":"f0c2a49d-68aa-428a-87dd-fc3cddb41040","Type":"ContainerDied","Data":"881f1d78b417f196189fc804ba3e9d31dcf76fba4901b4a4010e815cbee05e8a"} Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.763297 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5hkf9" podStartSLOduration=145.763284502 podStartE2EDuration="2m25.763284502s" podCreationTimestamp="2026-01-20 
03:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:52:02.761505828 +0000 UTC m=+169.361293687" watchObservedRunningTime="2026-01-20 03:52:02.763284502 +0000 UTC m=+169.363072361" Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.821415 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:52:02 crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:52:02 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:52:02 crc kubenswrapper[4898]: healthz check failed Jan 20 03:52:02 crc kubenswrapper[4898]: I0120 03:52:02.821504 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.616625 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 03:52:03 crc kubenswrapper[4898]: E0120 03:52:03.617049 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af44cc46-eb54-4cd4-8161-d1b522d7c41a" containerName="pruner" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.617065 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="af44cc46-eb54-4cd4-8161-d1b522d7c41a" containerName="pruner" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.617184 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="af44cc46-eb54-4cd4-8161-d1b522d7c41a" containerName="pruner" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.617741 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.619493 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.624310 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.624569 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.651784 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55655728-9a3e-4f5f-bc4f-eb4a0471d66a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.651868 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55655728-9a3e-4f5f-bc4f-eb4a0471d66a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.752879 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55655728-9a3e-4f5f-bc4f-eb4a0471d66a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.752959 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55655728-9a3e-4f5f-bc4f-eb4a0471d66a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.753122 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55655728-9a3e-4f5f-bc4f-eb4a0471d66a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.774055 4898 generic.go:334] "Generic (PLEG): container finished" podID="a73f73a2-1335-45a7-867b-18585f1c0862" containerID="3a1a758f5bb760edb0adfaeba23f50e231bc7b6838d3b7f375abb81d9918bc88" exitCode=0 Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.775049 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" event={"ID":"a73f73a2-1335-45a7-867b-18585f1c0862","Type":"ContainerDied","Data":"3a1a758f5bb760edb0adfaeba23f50e231bc7b6838d3b7f375abb81d9918bc88"} Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.790208 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55655728-9a3e-4f5f-bc4f-eb4a0471d66a\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.823105 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:52:03 crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:52:03 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:52:03 crc kubenswrapper[4898]: healthz check failed Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.823668 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 03:52:03 crc kubenswrapper[4898]: I0120 03:52:03.953417 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 03:52:04 crc kubenswrapper[4898]: I0120 03:52:04.448412 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 03:52:04 crc kubenswrapper[4898]: W0120 03:52:04.475901 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod55655728_9a3e_4f5f_bc4f_eb4a0471d66a.slice/crio-87a47dac2808258272f034aa360aafa4188321132d324b533b92aa2f07f2c26d WatchSource:0}: Error finding container 87a47dac2808258272f034aa360aafa4188321132d324b533b92aa2f07f2c26d: Status 404 returned error can't find the container with id 87a47dac2808258272f034aa360aafa4188321132d324b533b92aa2f07f2c26d Jan 20 03:52:04 crc kubenswrapper[4898]: I0120 03:52:04.785738 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55655728-9a3e-4f5f-bc4f-eb4a0471d66a","Type":"ContainerStarted","Data":"87a47dac2808258272f034aa360aafa4188321132d324b533b92aa2f07f2c26d"} Jan 20 03:52:04 crc kubenswrapper[4898]: I0120 03:52:04.821987 4898 patch_prober.go:28] interesting pod/router-default-5444994796-bmzz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 03:52:04 crc kubenswrapper[4898]: [-]has-synced failed: reason withheld Jan 20 03:52:04 crc kubenswrapper[4898]: [+]process-running ok Jan 20 03:52:04 crc kubenswrapper[4898]: healthz check failed Jan 20 03:52:04 crc kubenswrapper[4898]: I0120 03:52:04.822064 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bmzz9" podUID="dd9a227b-a085-42ce-b4b7-05fcfd678215" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.069043 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.181026 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pjqr\" (UniqueName: \"kubernetes.io/projected/a73f73a2-1335-45a7-867b-18585f1c0862-kube-api-access-6pjqr\") pod \"a73f73a2-1335-45a7-867b-18585f1c0862\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.181619 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a73f73a2-1335-45a7-867b-18585f1c0862-secret-volume\") pod \"a73f73a2-1335-45a7-867b-18585f1c0862\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.181742 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a73f73a2-1335-45a7-867b-18585f1c0862-config-volume\") pod \"a73f73a2-1335-45a7-867b-18585f1c0862\" (UID: \"a73f73a2-1335-45a7-867b-18585f1c0862\") " Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.182660 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a73f73a2-1335-45a7-867b-18585f1c0862-config-volume" (OuterVolumeSpecName: "config-volume") pod "a73f73a2-1335-45a7-867b-18585f1c0862" (UID: "a73f73a2-1335-45a7-867b-18585f1c0862"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.194515 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a73f73a2-1335-45a7-867b-18585f1c0862-kube-api-access-6pjqr" (OuterVolumeSpecName: "kube-api-access-6pjqr") pod "a73f73a2-1335-45a7-867b-18585f1c0862" (UID: "a73f73a2-1335-45a7-867b-18585f1c0862"). InnerVolumeSpecName "kube-api-access-6pjqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.195005 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a73f73a2-1335-45a7-867b-18585f1c0862-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a73f73a2-1335-45a7-867b-18585f1c0862" (UID: "a73f73a2-1335-45a7-867b-18585f1c0862"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.282940 4898 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a73f73a2-1335-45a7-867b-18585f1c0862-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.282978 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pjqr\" (UniqueName: \"kubernetes.io/projected/a73f73a2-1335-45a7-867b-18585f1c0862-kube-api-access-6pjqr\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.282990 4898 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a73f73a2-1335-45a7-867b-18585f1c0862-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.826858 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.827712 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55655728-9a3e-4f5f-bc4f-eb4a0471d66a","Type":"ContainerStarted","Data":"a8cf82af5666f14bc16bf5bc9dd67abfd702fa0795add776eea357f87798368c"} Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.837156 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-bmzz9" Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.838038 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" event={"ID":"a73f73a2-1335-45a7-867b-18585f1c0862","Type":"ContainerDied","Data":"7ab114c584c64f45456eb0847d3c9287438e3215fd8f96fbf943f2889970ad01"} Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.838091 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ab114c584c64f45456eb0847d3c9287438e3215fd8f96fbf943f2889970ad01" Jan 20 03:52:05 crc kubenswrapper[4898]: I0120 03:52:05.838163 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs" Jan 20 03:52:06 crc kubenswrapper[4898]: I0120 03:52:06.604086 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2xvbf" Jan 20 03:52:06 crc kubenswrapper[4898]: I0120 03:52:06.853065 4898 generic.go:334] "Generic (PLEG): container finished" podID="55655728-9a3e-4f5f-bc4f-eb4a0471d66a" containerID="a8cf82af5666f14bc16bf5bc9dd67abfd702fa0795add776eea357f87798368c" exitCode=0 Jan 20 03:52:06 crc kubenswrapper[4898]: I0120 03:52:06.853548 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55655728-9a3e-4f5f-bc4f-eb4a0471d66a","Type":"ContainerDied","Data":"a8cf82af5666f14bc16bf5bc9dd67abfd702fa0795add776eea357f87798368c"} Jan 20 03:52:09 crc kubenswrapper[4898]: I0120 03:52:09.976194 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 03:52:09 crc kubenswrapper[4898]: I0120 03:52:09.977192 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 03:52:11 crc kubenswrapper[4898]: I0120 03:52:11.158976 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 03:52:11 crc kubenswrapper[4898]: I0120 03:52:11.160405 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:52:11 crc kubenswrapper[4898]: I0120 03:52:11.166635 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 03:52:17 crc kubenswrapper[4898]: I0120 03:52:17.367257 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:52:17 crc kubenswrapper[4898]: I0120 03:52:17.459985 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 03:52:17 crc kubenswrapper[4898]: I0120 03:52:17.617176 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kube-api-access\") pod \"55655728-9a3e-4f5f-bc4f-eb4a0471d66a\" (UID: \"55655728-9a3e-4f5f-bc4f-eb4a0471d66a\") " Jan 20 03:52:17 crc kubenswrapper[4898]: I0120 03:52:17.617290 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kubelet-dir\") pod \"55655728-9a3e-4f5f-bc4f-eb4a0471d66a\" (UID: \"55655728-9a3e-4f5f-bc4f-eb4a0471d66a\") " Jan 20 03:52:17 crc kubenswrapper[4898]: I0120 03:52:17.617489 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "55655728-9a3e-4f5f-bc4f-eb4a0471d66a" (UID: "55655728-9a3e-4f5f-bc4f-eb4a0471d66a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:52:17 crc kubenswrapper[4898]: I0120 03:52:17.623845 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "55655728-9a3e-4f5f-bc4f-eb4a0471d66a" (UID: "55655728-9a3e-4f5f-bc4f-eb4a0471d66a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:52:17 crc kubenswrapper[4898]: I0120 03:52:17.719800 4898 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:17 crc kubenswrapper[4898]: I0120 03:52:17.719968 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55655728-9a3e-4f5f-bc4f-eb4a0471d66a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:17 crc kubenswrapper[4898]: I0120 03:52:17.945070 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55655728-9a3e-4f5f-bc4f-eb4a0471d66a","Type":"ContainerDied","Data":"87a47dac2808258272f034aa360aafa4188321132d324b533b92aa2f07f2c26d"} Jan 20 03:52:17 crc kubenswrapper[4898]: I0120 03:52:17.945114 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 03:52:17 crc kubenswrapper[4898]: I0120 03:52:17.945119 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87a47dac2808258272f034aa360aafa4188321132d324b533b92aa2f07f2c26d" Jan 20 03:52:30 crc kubenswrapper[4898]: I0120 03:52:30.908003 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zwrwk" Jan 20 03:52:31 crc kubenswrapper[4898]: E0120 03:52:31.707040 4898 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 03:52:31 crc kubenswrapper[4898]: E0120 03:52:31.707293 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8skv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-74j5p_openshift-marketplace(46ecbb04-c29f-45a2-88aa-1d58b4d44820): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 03:52:31 crc kubenswrapper[4898]: E0120 03:52:31.708545 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-74j5p" podUID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" Jan 20 03:52:31 crc kubenswrapper[4898]: E0120 03:52:31.743696 4898 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 03:52:31 crc kubenswrapper[4898]: E0120 03:52:31.743857 4898 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rhf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k45j4_openshift-marketplace(1b568186-dd5f-4340-9b0b-f083bf37a1b5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 03:52:31 crc kubenswrapper[4898]: E0120 03:52:31.745046 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-k45j4" podUID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" Jan 20 03:52:34 crc kubenswrapper[4898]: E0120 03:52:34.551262 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-74j5p" podUID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" Jan 20 03:52:34 crc kubenswrapper[4898]: E0120 03:52:34.551424 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-k45j4" podUID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" Jan 20 03:52:34 crc kubenswrapper[4898]: E0120 03:52:34.649596 4898 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 03:52:34 crc kubenswrapper[4898]: E0120 03:52:34.649798 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-khjrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-r4r2t_openshift-marketplace(e5d08231-f987-45cc-ac85-683f31f6e616): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 03:52:34 crc kubenswrapper[4898]: E0120 03:52:34.651063 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-r4r2t" podUID="e5d08231-f987-45cc-ac85-683f31f6e616" Jan 20 03:52:37 crc kubenswrapper[4898]: E0120 03:52:37.066605 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-r4r2t" podUID="e5d08231-f987-45cc-ac85-683f31f6e616" Jan 20 03:52:37 crc kubenswrapper[4898]: E0120 03:52:37.282130 4898 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 03:52:37 crc kubenswrapper[4898]: E0120 03:52:37.282273 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjb9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bvccs_openshift-marketplace(2eab3b38-2b5e-4ab8-8660-a45f19b1d329): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 03:52:37 crc kubenswrapper[4898]: E0120 03:52:37.283651 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bvccs" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" Jan 20 03:52:37 crc kubenswrapper[4898]: E0120 03:52:37.453776 4898 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 03:52:37 crc kubenswrapper[4898]: E0120 03:52:37.453981 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vv77s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ss85s_openshift-marketplace(63bc00c4-5532-4daa-9e22-b7bc5424035d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 03:52:37 crc kubenswrapper[4898]: E0120 03:52:37.455540 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ss85s" podUID="63bc00c4-5532-4daa-9e22-b7bc5424035d" Jan 20 03:52:37 crc kubenswrapper[4898]: E0120 03:52:37.529410 4898 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 03:52:37 crc kubenswrapper[4898]: E0120 03:52:37.529665 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4p4k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f9m8f_openshift-marketplace(f0c2a49d-68aa-428a-87dd-fc3cddb41040): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 03:52:37 crc kubenswrapper[4898]: E0120 03:52:37.531117 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-f9m8f" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" Jan 20 03:52:38 crc kubenswrapper[4898]: I0120 03:52:38.056884 4898 generic.go:334] "Generic (PLEG): container finished" podID="36ab491e-141c-4810-8d67-e31de85498c9" containerID="5332a4cb60c3e49043d9acb15e9fc1679666463c3eb658f01263ce12ab4bbbdd" exitCode=0 Jan 20 03:52:38 crc kubenswrapper[4898]: I0120 03:52:38.056962 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt5nd" event={"ID":"36ab491e-141c-4810-8d67-e31de85498c9","Type":"ContainerDied","Data":"5332a4cb60c3e49043d9acb15e9fc1679666463c3eb658f01263ce12ab4bbbdd"} Jan 20 03:52:38 crc kubenswrapper[4898]: I0120 03:52:38.061011 4898 generic.go:334] "Generic (PLEG): container finished" podID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerID="e00ab1278b11d7cefcaee3b786572c942432fe6c48ae6eeeb080da17efbb54ac" exitCode=0 Jan 20 03:52:38 crc kubenswrapper[4898]: I0120 03:52:38.061046 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgrfj" event={"ID":"70d524b5-855e-4dda-aaa8-5ae9463e7b3c","Type":"ContainerDied","Data":"e00ab1278b11d7cefcaee3b786572c942432fe6c48ae6eeeb080da17efbb54ac"} Jan 20 03:52:38 crc kubenswrapper[4898]: E0120 03:52:38.063533 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bvccs" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" Jan 20 03:52:38 crc kubenswrapper[4898]: 
E0120 03:52:38.065070 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ss85s" podUID="63bc00c4-5532-4daa-9e22-b7bc5424035d" Jan 20 03:52:38 crc kubenswrapper[4898]: E0120 03:52:38.065225 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9m8f" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.069699 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt5nd" event={"ID":"36ab491e-141c-4810-8d67-e31de85498c9","Type":"ContainerStarted","Data":"45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e"} Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.072294 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgrfj" event={"ID":"70d524b5-855e-4dda-aaa8-5ae9463e7b3c","Type":"ContainerStarted","Data":"35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7"} Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.090765 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wt5nd" podStartSLOduration=1.96298592 podStartE2EDuration="40.090746779s" podCreationTimestamp="2026-01-20 03:51:59 +0000 UTC" firstStartedPulling="2026-01-20 03:52:00.643415993 +0000 UTC m=+167.243203852" lastFinishedPulling="2026-01-20 03:52:38.771176842 +0000 UTC m=+205.370964711" observedRunningTime="2026-01-20 03:52:39.088828412 +0000 UTC m=+205.688616271" watchObservedRunningTime="2026-01-20 03:52:39.090746779 +0000 UTC m=+205.690534648" Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.109000 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qgrfj" podStartSLOduration=2.244209593 podStartE2EDuration="40.108984496s" podCreationTimestamp="2026-01-20 03:51:59 +0000 UTC" firstStartedPulling="2026-01-20 03:52:00.664831792 +0000 UTC m=+167.264619651" lastFinishedPulling="2026-01-20 03:52:38.529606685 +0000 UTC m=+205.129394554" observedRunningTime="2026-01-20 03:52:39.107545343 +0000 UTC m=+205.707333202" watchObservedRunningTime="2026-01-20 03:52:39.108984496 +0000 UTC m=+205.708772365" Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.493512 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.493663 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.563494 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zkb77"] Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.874368 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.874470 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.976200 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.976270 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.976322 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.976918 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 03:52:39 crc kubenswrapper[4898]: I0120 03:52:39.977026 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e" gracePeriod=600 Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.594944 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 03:52:40 crc kubenswrapper[4898]: E0120 03:52:40.595979 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73f73a2-1335-45a7-867b-18585f1c0862" containerName="collect-profiles" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.595995 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73f73a2-1335-45a7-867b-18585f1c0862" containerName="collect-profiles" Jan 20 03:52:40 crc kubenswrapper[4898]: E0120 03:52:40.596010 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55655728-9a3e-4f5f-bc4f-eb4a0471d66a" containerName="pruner" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.596019 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="55655728-9a3e-4f5f-bc4f-eb4a0471d66a" containerName="pruner" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.596165 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="a73f73a2-1335-45a7-867b-18585f1c0862" containerName="collect-profiles" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.596187 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="55655728-9a3e-4f5f-bc4f-eb4a0471d66a" containerName="pruner" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.596801 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.600019 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qgrfj" podUID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerName="registry-server" probeResult="failure" output=< Jan 20 03:52:40 crc kubenswrapper[4898]: timeout: failed to connect service ":50051" within 1s Jan 20 03:52:40 crc kubenswrapper[4898]: > Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.600656 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.601121 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.605762 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.720406 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e92b1193-c02a-4af6-8c2d-b8a651de15d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.720482 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e92b1193-c02a-4af6-8c2d-b8a651de15d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.821716 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e92b1193-c02a-4af6-8c2d-b8a651de15d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.821792 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e92b1193-c02a-4af6-8c2d-b8a651de15d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.821857 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e92b1193-c02a-4af6-8c2d-b8a651de15d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.841187 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e92b1193-c02a-4af6-8c2d-b8a651de15d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.911878 4898 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-wt5nd" podUID="36ab491e-141c-4810-8d67-e31de85498c9" containerName="registry-server" probeResult="failure" output=< Jan 20 03:52:40 crc kubenswrapper[4898]: timeout: failed to connect service ":50051" within 1s Jan 20 03:52:40 crc kubenswrapper[4898]: > Jan 20 03:52:40 crc kubenswrapper[4898]: I0120 03:52:40.914248 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 03:52:41 crc kubenswrapper[4898]: I0120 03:52:41.085903 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e" exitCode=0 Jan 20 03:52:41 crc kubenswrapper[4898]: I0120 03:52:41.086023 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e"} Jan 20 03:52:41 crc kubenswrapper[4898]: I0120 03:52:41.086499 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"8cbb7ad6d85d39ea7ff2c1068b3057e97901016363b5fcbcec2aac6f311cf2b5"} Jan 20 03:52:41 crc kubenswrapper[4898]: I0120 03:52:41.325984 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 03:52:41 crc kubenswrapper[4898]: W0120 03:52:41.335592 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode92b1193_c02a_4af6_8c2d_b8a651de15d7.slice/crio-3a1caded76024e7899404b40b61b1e68da03ef12e3c87a6751b699fff5906502 WatchSource:0}: Error finding container 3a1caded76024e7899404b40b61b1e68da03ef12e3c87a6751b699fff5906502: Status 404 returned error can't find the container with id 3a1caded76024e7899404b40b61b1e68da03ef12e3c87a6751b699fff5906502 Jan 20 03:52:42 crc kubenswrapper[4898]: I0120 03:52:42.093067 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e92b1193-c02a-4af6-8c2d-b8a651de15d7","Type":"ContainerStarted","Data":"0028ef233ffaa5a55483205fc0a04cc7b33b82011c3d9e83a5c25f0b21c01a49"} Jan 20 03:52:42 crc kubenswrapper[4898]: I0120 03:52:42.093808 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e92b1193-c02a-4af6-8c2d-b8a651de15d7","Type":"ContainerStarted","Data":"3a1caded76024e7899404b40b61b1e68da03ef12e3c87a6751b699fff5906502"} Jan 20 03:52:43 crc kubenswrapper[4898]: I0120 03:52:43.100325 4898 generic.go:334] "Generic (PLEG): container finished" podID="e92b1193-c02a-4af6-8c2d-b8a651de15d7" containerID="0028ef233ffaa5a55483205fc0a04cc7b33b82011c3d9e83a5c25f0b21c01a49" exitCode=0 Jan 20 03:52:43 crc kubenswrapper[4898]: I0120 03:52:43.100456 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e92b1193-c02a-4af6-8c2d-b8a651de15d7","Type":"ContainerDied","Data":"0028ef233ffaa5a55483205fc0a04cc7b33b82011c3d9e83a5c25f0b21c01a49"} Jan 20 03:52:44 crc kubenswrapper[4898]: I0120 03:52:44.452101 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 03:52:44 crc kubenswrapper[4898]: I0120 03:52:44.572610 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kube-api-access\") pod \"e92b1193-c02a-4af6-8c2d-b8a651de15d7\" (UID: \"e92b1193-c02a-4af6-8c2d-b8a651de15d7\") " Jan 20 03:52:44 crc kubenswrapper[4898]: I0120 03:52:44.572771 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kubelet-dir\") pod \"e92b1193-c02a-4af6-8c2d-b8a651de15d7\" (UID: \"e92b1193-c02a-4af6-8c2d-b8a651de15d7\") " Jan 20 03:52:44 crc kubenswrapper[4898]: I0120 03:52:44.573024 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e92b1193-c02a-4af6-8c2d-b8a651de15d7" (UID: "e92b1193-c02a-4af6-8c2d-b8a651de15d7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:52:44 crc kubenswrapper[4898]: I0120 03:52:44.573189 4898 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:44 crc kubenswrapper[4898]: I0120 03:52:44.581833 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e92b1193-c02a-4af6-8c2d-b8a651de15d7" (UID: "e92b1193-c02a-4af6-8c2d-b8a651de15d7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:52:44 crc kubenswrapper[4898]: I0120 03:52:44.674349 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e92b1193-c02a-4af6-8c2d-b8a651de15d7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:45 crc kubenswrapper[4898]: I0120 03:52:45.112756 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e92b1193-c02a-4af6-8c2d-b8a651de15d7","Type":"ContainerDied","Data":"3a1caded76024e7899404b40b61b1e68da03ef12e3c87a6751b699fff5906502"} Jan 20 03:52:45 crc kubenswrapper[4898]: I0120 03:52:45.112809 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a1caded76024e7899404b40b61b1e68da03ef12e3c87a6751b699fff5906502" Jan 20 03:52:45 crc kubenswrapper[4898]: I0120 03:52:45.112855 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.394392 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 03:52:46 crc kubenswrapper[4898]: E0120 03:52:46.395170 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e92b1193-c02a-4af6-8c2d-b8a651de15d7" containerName="pruner" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.395186 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="e92b1193-c02a-4af6-8c2d-b8a651de15d7" containerName="pruner" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.395309 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="e92b1193-c02a-4af6-8c2d-b8a651de15d7" containerName="pruner" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.395743 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.398168 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.398329 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.415613 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.579015 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9571428a-14e5-47b1-b963-13f4a5bdfaba-kube-api-access\") pod \"installer-9-crc\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.579328 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.579385 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-var-lock\") pod \"installer-9-crc\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.680518 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.680578 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-var-lock\") pod \"installer-9-crc\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.680658 4898 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9571428a-14e5-47b1-b963-13f4a5bdfaba-kube-api-access\") pod \"installer-9-crc\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.680666 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.680716 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-var-lock\") pod \"installer-9-crc\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.698100 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9571428a-14e5-47b1-b963-13f4a5bdfaba-kube-api-access\") pod \"installer-9-crc\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:52:46 crc kubenswrapper[4898]: I0120 03:52:46.740608 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:52:47 crc kubenswrapper[4898]: I0120 03:52:47.147784 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 03:52:47 crc kubenswrapper[4898]: W0120 03:52:47.160593 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9571428a_14e5_47b1_b963_13f4a5bdfaba.slice/crio-d01a14a65cb7dabbe7199012600aef288bec68564f07fc8defa0a72bd4b46c56 WatchSource:0}: Error finding container d01a14a65cb7dabbe7199012600aef288bec68564f07fc8defa0a72bd4b46c56: Status 404 returned error can't find the container with id d01a14a65cb7dabbe7199012600aef288bec68564f07fc8defa0a72bd4b46c56 Jan 20 03:52:48 crc kubenswrapper[4898]: I0120 03:52:48.132856 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9571428a-14e5-47b1-b963-13f4a5bdfaba","Type":"ContainerStarted","Data":"d01a14a65cb7dabbe7199012600aef288bec68564f07fc8defa0a72bd4b46c56"} Jan 20 03:52:49 crc kubenswrapper[4898]: I0120 03:52:49.143970 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9571428a-14e5-47b1-b963-13f4a5bdfaba","Type":"ContainerStarted","Data":"0fad52a88eb7f7a2ad017c176ef95b61b8c7442f2a0397bc2919147587528091"} Jan 20 03:52:49 crc kubenswrapper[4898]: I0120 03:52:49.169887 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.169855496 podStartE2EDuration="3.169855496s" podCreationTimestamp="2026-01-20 03:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:52:49.169843446 +0000 UTC m=+215.769631335" watchObservedRunningTime="2026-01-20 03:52:49.169855496 +0000 UTC m=+215.769643395" Jan 20 03:52:49 crc kubenswrapper[4898]: I0120 03:52:49.583048 4898 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:52:49 crc kubenswrapper[4898]: I0120 03:52:49.678762 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:52:49 crc kubenswrapper[4898]: I0120 03:52:49.915887 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:52:49 crc kubenswrapper[4898]: I0120 03:52:49.965207 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:52:51 crc kubenswrapper[4898]: I0120 03:52:51.161793 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4r2t" event={"ID":"e5d08231-f987-45cc-ac85-683f31f6e616","Type":"ContainerStarted","Data":"431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1"} Jan 20 03:52:51 crc kubenswrapper[4898]: I0120 03:52:51.164040 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74j5p" event={"ID":"46ecbb04-c29f-45a2-88aa-1d58b4d44820","Type":"ContainerStarted","Data":"97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b"} Jan 20 03:52:51 crc kubenswrapper[4898]: I0120 03:52:51.166234 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k45j4" event={"ID":"1b568186-dd5f-4340-9b0b-f083bf37a1b5","Type":"ContainerStarted","Data":"eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6"} Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.184472 4898 generic.go:334] "Generic (PLEG): container finished" podID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" containerID="eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6" exitCode=0 Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.184525 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k45j4" event={"ID":"1b568186-dd5f-4340-9b0b-f083bf37a1b5","Type":"ContainerDied","Data":"eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6"} Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.190367 4898 generic.go:334] "Generic (PLEG): container finished" podID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerID="83df61923a4a352eaea6491bd6f88a2956d771e4798e14fcba5fe61f6fcb8455" exitCode=0 Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.190455 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvccs" event={"ID":"2eab3b38-2b5e-4ab8-8660-a45f19b1d329","Type":"ContainerDied","Data":"83df61923a4a352eaea6491bd6f88a2956d771e4798e14fcba5fe61f6fcb8455"} Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.197234 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9m8f" event={"ID":"f0c2a49d-68aa-428a-87dd-fc3cddb41040","Type":"ContainerStarted","Data":"f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7"} Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.200198 4898 generic.go:334] "Generic (PLEG): container finished" podID="e5d08231-f987-45cc-ac85-683f31f6e616" containerID="431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1" exitCode=0 Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.200281 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4r2t" 
event={"ID":"e5d08231-f987-45cc-ac85-683f31f6e616","Type":"ContainerDied","Data":"431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1"} Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.208313 4898 generic.go:334] "Generic (PLEG): container finished" podID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" containerID="97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b" exitCode=0 Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.208351 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74j5p" event={"ID":"46ecbb04-c29f-45a2-88aa-1d58b4d44820","Type":"ContainerDied","Data":"97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b"} Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.236048 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt5nd"] Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.236397 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wt5nd" podUID="36ab491e-141c-4810-8d67-e31de85498c9" containerName="registry-server" containerID="cri-o://45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e" gracePeriod=2 Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.712488 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.844139 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-catalog-content\") pod \"36ab491e-141c-4810-8d67-e31de85498c9\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.844195 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvbj2\" (UniqueName: \"kubernetes.io/projected/36ab491e-141c-4810-8d67-e31de85498c9-kube-api-access-kvbj2\") pod \"36ab491e-141c-4810-8d67-e31de85498c9\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.844279 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-utilities\") pod \"36ab491e-141c-4810-8d67-e31de85498c9\" (UID: \"36ab491e-141c-4810-8d67-e31de85498c9\") " Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.845774 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-utilities" (OuterVolumeSpecName: "utilities") pod "36ab491e-141c-4810-8d67-e31de85498c9" (UID: "36ab491e-141c-4810-8d67-e31de85498c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.854135 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ab491e-141c-4810-8d67-e31de85498c9-kube-api-access-kvbj2" (OuterVolumeSpecName: "kube-api-access-kvbj2") pod "36ab491e-141c-4810-8d67-e31de85498c9" (UID: "36ab491e-141c-4810-8d67-e31de85498c9"). InnerVolumeSpecName "kube-api-access-kvbj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.879194 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36ab491e-141c-4810-8d67-e31de85498c9" (UID: "36ab491e-141c-4810-8d67-e31de85498c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.946106 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.946140 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvbj2\" (UniqueName: \"kubernetes.io/projected/36ab491e-141c-4810-8d67-e31de85498c9-kube-api-access-kvbj2\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:52 crc kubenswrapper[4898]: I0120 03:52:52.946152 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36ab491e-141c-4810-8d67-e31de85498c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.218640 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4r2t" event={"ID":"e5d08231-f987-45cc-ac85-683f31f6e616","Type":"ContainerStarted","Data":"4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3"} Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.220385 4898 generic.go:334] "Generic (PLEG): container finished" podID="36ab491e-141c-4810-8d67-e31de85498c9" containerID="45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e" exitCode=0 Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.220461 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt5nd" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.220458 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt5nd" event={"ID":"36ab491e-141c-4810-8d67-e31de85498c9","Type":"ContainerDied","Data":"45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e"} Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.220569 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt5nd" event={"ID":"36ab491e-141c-4810-8d67-e31de85498c9","Type":"ContainerDied","Data":"de6a2230c8e08158d9b7873764d8ec814fb407c6a035f00b83b8a7335a2ce07d"} Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.220587 4898 scope.go:117] "RemoveContainer" containerID="45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.222257 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74j5p" event={"ID":"46ecbb04-c29f-45a2-88aa-1d58b4d44820","Type":"ContainerStarted","Data":"978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f"} Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.224473 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k45j4" event={"ID":"1b568186-dd5f-4340-9b0b-f083bf37a1b5","Type":"ContainerStarted","Data":"f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394"} Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.228295 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvccs" event={"ID":"2eab3b38-2b5e-4ab8-8660-a45f19b1d329","Type":"ContainerStarted","Data":"3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4"} Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.231834 4898 generic.go:334] "Generic (PLEG): container finished" podID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" containerID="f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7" exitCode=0 Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.231874 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9m8f" event={"ID":"f0c2a49d-68aa-428a-87dd-fc3cddb41040","Type":"ContainerDied","Data":"f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7"} Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.235583 4898 scope.go:117] "RemoveContainer" containerID="5332a4cb60c3e49043d9acb15e9fc1679666463c3eb658f01263ce12ab4bbbdd" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.241952 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r4r2t" podStartSLOduration=3.226612499 podStartE2EDuration="53.241932721s" podCreationTimestamp="2026-01-20 03:52:00 +0000 UTC" firstStartedPulling="2026-01-20 03:52:02.734652411 +0000 UTC m=+169.334440270" lastFinishedPulling="2026-01-20 03:52:52.749972623 +0000 UTC m=+219.349760492" observedRunningTime="2026-01-20 03:52:53.238812817 +0000 UTC m=+219.838600666" watchObservedRunningTime="2026-01-20 03:52:53.241932721 +0000 UTC m=+219.841720590" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.265732 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bvccs" podStartSLOduration=1.9850176290000001 podStartE2EDuration="56.265716555s" podCreationTimestamp="2026-01-20 
03:51:57 +0000 UTC" firstStartedPulling="2026-01-20 03:51:58.603352229 +0000 UTC m=+165.203140088" lastFinishedPulling="2026-01-20 03:52:52.884051155 +0000 UTC m=+219.483839014" observedRunningTime="2026-01-20 03:52:53.26424798 +0000 UTC m=+219.864035909" watchObservedRunningTime="2026-01-20 03:52:53.265716555 +0000 UTC m=+219.865504414" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.266503 4898 scope.go:117] "RemoveContainer" containerID="52b124f73d48cd46c71d945bddfa17108207fdc5bab61c1d30235b0328780096" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.289131 4898 scope.go:117] "RemoveContainer" containerID="45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e" Jan 20 03:52:53 crc kubenswrapper[4898]: E0120 03:52:53.289639 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e\": container with ID starting with 45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e not found: ID does not exist" containerID="45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.289690 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e"} err="failed to get container status \"45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e\": rpc error: code = NotFound desc = could not find container \"45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e\": container with ID starting with 45ed2ac0aaf918211817663b404e36a41e98e2e5ccb51dd4bb40a173bd9a1c9e not found: ID does not exist" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.289718 4898 scope.go:117] "RemoveContainer" containerID="5332a4cb60c3e49043d9acb15e9fc1679666463c3eb658f01263ce12ab4bbbdd" Jan 20 03:52:53 crc kubenswrapper[4898]: E0120 03:52:53.289949 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5332a4cb60c3e49043d9acb15e9fc1679666463c3eb658f01263ce12ab4bbbdd\": container with ID starting with 5332a4cb60c3e49043d9acb15e9fc1679666463c3eb658f01263ce12ab4bbbdd not found: ID does not exist" containerID="5332a4cb60c3e49043d9acb15e9fc1679666463c3eb658f01263ce12ab4bbbdd" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.289978 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5332a4cb60c3e49043d9acb15e9fc1679666463c3eb658f01263ce12ab4bbbdd"} err="failed to get container status \"5332a4cb60c3e49043d9acb15e9fc1679666463c3eb658f01263ce12ab4bbbdd\": rpc error: code = NotFound desc = could not find container \"5332a4cb60c3e49043d9acb15e9fc1679666463c3eb658f01263ce12ab4bbbdd\": container with ID starting with 5332a4cb60c3e49043d9acb15e9fc1679666463c3eb658f01263ce12ab4bbbdd not found: ID does not exist" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.289996 4898 scope.go:117] "RemoveContainer" containerID="52b124f73d48cd46c71d945bddfa17108207fdc5bab61c1d30235b0328780096" Jan 20 03:52:53 crc kubenswrapper[4898]: E0120 03:52:53.291066 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b124f73d48cd46c71d945bddfa17108207fdc5bab61c1d30235b0328780096\": container with ID starting with 52b124f73d48cd46c71d945bddfa17108207fdc5bab61c1d30235b0328780096 not found: ID does not 
exist" containerID="52b124f73d48cd46c71d945bddfa17108207fdc5bab61c1d30235b0328780096" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.291104 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b124f73d48cd46c71d945bddfa17108207fdc5bab61c1d30235b0328780096"} err="failed to get container status \"52b124f73d48cd46c71d945bddfa17108207fdc5bab61c1d30235b0328780096\": rpc error: code = NotFound desc = could not find container \"52b124f73d48cd46c71d945bddfa17108207fdc5bab61c1d30235b0328780096\": container with ID starting with 52b124f73d48cd46c71d945bddfa17108207fdc5bab61c1d30235b0328780096 not found: ID does not exist" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.302661 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k45j4" podStartSLOduration=3.298698281 podStartE2EDuration="57.302652372s" podCreationTimestamp="2026-01-20 03:51:56 +0000 UTC" firstStartedPulling="2026-01-20 03:51:58.609283781 +0000 UTC m=+165.209071640" lastFinishedPulling="2026-01-20 03:52:52.613237872 +0000 UTC m=+219.213025731" observedRunningTime="2026-01-20 03:52:53.301375574 +0000 UTC m=+219.901163433" watchObservedRunningTime="2026-01-20 03:52:53.302652372 +0000 UTC m=+219.902440231" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.322047 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-74j5p" podStartSLOduration=2.235047023 podStartE2EDuration="56.322031833s" podCreationTimestamp="2026-01-20 03:51:57 +0000 UTC" firstStartedPulling="2026-01-20 03:51:58.609529089 +0000 UTC m=+165.209316948" lastFinishedPulling="2026-01-20 03:52:52.696513879 +0000 UTC m=+219.296301758" observedRunningTime="2026-01-20 03:52:53.320014683 +0000 UTC m=+219.919802542" watchObservedRunningTime="2026-01-20 03:52:53.322031833 +0000 UTC m=+219.921819692" Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.338130 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt5nd"] Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.344001 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt5nd"] Jan 20 03:52:53 crc kubenswrapper[4898]: I0120 03:52:53.730700 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ab491e-141c-4810-8d67-e31de85498c9" path="/var/lib/kubelet/pods/36ab491e-141c-4810-8d67-e31de85498c9/volumes" Jan 20 03:52:54 crc kubenswrapper[4898]: I0120 03:52:54.240063 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9m8f" event={"ID":"f0c2a49d-68aa-428a-87dd-fc3cddb41040","Type":"ContainerStarted","Data":"fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b"} Jan 20 03:52:54 crc kubenswrapper[4898]: I0120 03:52:54.242662 4898 generic.go:334] "Generic (PLEG): container finished" podID="63bc00c4-5532-4daa-9e22-b7bc5424035d" containerID="978aeb77bdfde9ee352b5847f4b26ff78ea060a33bd239ba4ff1ac9e29886d1a" exitCode=0 Jan 20 03:52:54 crc kubenswrapper[4898]: I0120 03:52:54.242706 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss85s" event={"ID":"63bc00c4-5532-4daa-9e22-b7bc5424035d","Type":"ContainerDied","Data":"978aeb77bdfde9ee352b5847f4b26ff78ea060a33bd239ba4ff1ac9e29886d1a"} Jan 20 03:52:54 crc kubenswrapper[4898]: I0120 03:52:54.257512 4898 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-f9m8f" podStartSLOduration=2.069877409 podStartE2EDuration="54.257495675s" podCreationTimestamp="2026-01-20 03:52:00 +0000 UTC" firstStartedPulling="2026-01-20 03:52:01.713399196 +0000 UTC m=+168.313187055" lastFinishedPulling="2026-01-20 03:52:53.901017452 +0000 UTC m=+220.500805321" observedRunningTime="2026-01-20 03:52:54.254605968 +0000 UTC m=+220.854393827" watchObservedRunningTime="2026-01-20 03:52:54.257495675 +0000 UTC m=+220.857283534" Jan 20 03:52:55 crc kubenswrapper[4898]: I0120 03:52:55.253104 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss85s" event={"ID":"63bc00c4-5532-4daa-9e22-b7bc5424035d","Type":"ContainerStarted","Data":"6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5"} Jan 20 03:52:55 crc kubenswrapper[4898]: I0120 03:52:55.284323 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ss85s" podStartSLOduration=2.196659833 podStartE2EDuration="58.284287618s" podCreationTimestamp="2026-01-20 03:51:57 +0000 UTC" firstStartedPulling="2026-01-20 03:51:58.609727565 +0000 UTC m=+165.209515424" lastFinishedPulling="2026-01-20 03:52:54.69735535 +0000 UTC m=+221.297143209" observedRunningTime="2026-01-20 03:52:55.279876325 +0000 UTC m=+221.879664194" watchObservedRunningTime="2026-01-20 03:52:55.284287618 +0000 UTC m=+221.884075507" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.254662 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.254726 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.322078 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.382939 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.456861 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.458747 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.520287 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.657367 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.658059 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.730587 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.865026 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.865079 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:52:57 crc kubenswrapper[4898]: I0120 03:52:57.944351 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:52:58 crc kubenswrapper[4898]: I0120 03:52:58.343308 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:52:58 crc kubenswrapper[4898]: I0120 03:52:58.361364 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:53:00 crc kubenswrapper[4898]: I0120 03:53:00.538281 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:53:00 crc kubenswrapper[4898]: I0120 03:53:00.538744 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:53:00 crc kubenswrapper[4898]: I0120 03:53:00.940176 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:53:00 crc kubenswrapper[4898]: I0120 03:53:00.940251 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:53:01 crc kubenswrapper[4898]: I0120 03:53:01.038950 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:53:01 crc kubenswrapper[4898]: I0120 03:53:01.362239 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:53:01 crc kubenswrapper[4898]: I0120 03:53:01.609116 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9m8f" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" containerName="registry-server" probeResult="failure" output=< Jan 20 03:53:01 crc kubenswrapper[4898]: timeout: failed to connect service ":50051" within 1s Jan 20 03:53:01 crc kubenswrapper[4898]: > Jan 20 03:53:01 crc kubenswrapper[4898]: I0120 03:53:01.632629 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-74j5p"] Jan 20 03:53:01 crc kubenswrapper[4898]: I0120 03:53:01.633232 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-74j5p" podUID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" containerName="registry-server" containerID="cri-o://978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f" gracePeriod=2 Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.147543 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.230625 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-utilities\") pod \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.230739 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-catalog-content\") pod \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.230786 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8skv\" (UniqueName: \"kubernetes.io/projected/46ecbb04-c29f-45a2-88aa-1d58b4d44820-kube-api-access-t8skv\") pod \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\" (UID: \"46ecbb04-c29f-45a2-88aa-1d58b4d44820\") " Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.257237 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-utilities" (OuterVolumeSpecName: "utilities") pod "46ecbb04-c29f-45a2-88aa-1d58b4d44820" (UID: "46ecbb04-c29f-45a2-88aa-1d58b4d44820"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.267299 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ecbb04-c29f-45a2-88aa-1d58b4d44820-kube-api-access-t8skv" (OuterVolumeSpecName: "kube-api-access-t8skv") pod "46ecbb04-c29f-45a2-88aa-1d58b4d44820" (UID: "46ecbb04-c29f-45a2-88aa-1d58b4d44820"). InnerVolumeSpecName "kube-api-access-t8skv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.312556 4898 generic.go:334] "Generic (PLEG): container finished" podID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" containerID="978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f" exitCode=0 Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.312629 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-74j5p" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.312676 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74j5p" event={"ID":"46ecbb04-c29f-45a2-88aa-1d58b4d44820","Type":"ContainerDied","Data":"978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f"} Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.312754 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74j5p" event={"ID":"46ecbb04-c29f-45a2-88aa-1d58b4d44820","Type":"ContainerDied","Data":"e35082825b756e6cc9513cdffecf53f50cf697d3d24e03ea4a0062a715e5683a"} Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.312786 4898 scope.go:117] "RemoveContainer" containerID="978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.313285 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46ecbb04-c29f-45a2-88aa-1d58b4d44820" (UID: "46ecbb04-c29f-45a2-88aa-1d58b4d44820"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.328136 4898 scope.go:117] "RemoveContainer" containerID="97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.331695 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.331977 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ecbb04-c29f-45a2-88aa-1d58b4d44820-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.332001 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8skv\" (UniqueName: \"kubernetes.io/projected/46ecbb04-c29f-45a2-88aa-1d58b4d44820-kube-api-access-t8skv\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.343172 4898 scope.go:117] "RemoveContainer" containerID="0ea006117be734c07bbcf9760854f32e5e216c07968c58c4f6d4fd0812264cef" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.362190 4898 scope.go:117] "RemoveContainer" containerID="978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f" Jan 20 03:53:02 crc kubenswrapper[4898]: E0120 03:53:02.362935 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f\": container with ID starting with 978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f not found: ID does not exist" containerID="978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.362984 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f"} err="failed to get container status \"978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f\": rpc error: code = NotFound desc = could not find container 
\"978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f\": container with ID starting with 978131b4d1ccc2b77d33f8e852c7994decd5a81acd39d93692f9068d66b1873f not found: ID does not exist" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.363016 4898 scope.go:117] "RemoveContainer" containerID="97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b" Jan 20 03:53:02 crc kubenswrapper[4898]: E0120 03:53:02.363393 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b\": container with ID starting with 97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b not found: ID does not exist" containerID="97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.363416 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b"} err="failed to get container status \"97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b\": rpc error: code = NotFound desc = could not find container \"97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b\": container with ID starting with 97fd6dff322ef617ce1627316a63e64e2a756863f0a43e8a1926c5434328f12b not found: ID does not exist" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.363446 4898 scope.go:117] "RemoveContainer" containerID="0ea006117be734c07bbcf9760854f32e5e216c07968c58c4f6d4fd0812264cef" Jan 20 03:53:02 crc kubenswrapper[4898]: E0120 03:53:02.363774 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea006117be734c07bbcf9760854f32e5e216c07968c58c4f6d4fd0812264cef\": container with ID starting with 0ea006117be734c07bbcf9760854f32e5e216c07968c58c4f6d4fd0812264cef not found: ID does not exist" containerID="0ea006117be734c07bbcf9760854f32e5e216c07968c58c4f6d4fd0812264cef" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.363794 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea006117be734c07bbcf9760854f32e5e216c07968c58c4f6d4fd0812264cef"} err="failed to get container status \"0ea006117be734c07bbcf9760854f32e5e216c07968c58c4f6d4fd0812264cef\": rpc error: code = NotFound desc = could not find container \"0ea006117be734c07bbcf9760854f32e5e216c07968c58c4f6d4fd0812264cef\": container with ID starting with 0ea006117be734c07bbcf9760854f32e5e216c07968c58c4f6d4fd0812264cef not found: ID does not exist" Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.639214 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-74j5p"] Jan 20 03:53:02 crc kubenswrapper[4898]: I0120 03:53:02.645318 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-74j5p"] Jan 20 03:53:03 crc kubenswrapper[4898]: I0120 03:53:03.729194 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" path="/var/lib/kubelet/pods/46ecbb04-c29f-45a2-88aa-1d58b4d44820/volumes" Jan 20 03:53:04 crc kubenswrapper[4898]: I0120 03:53:04.592365 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" podUID="620006cf-5c3f-457c-a416-30384cf951ec" containerName="oauth-openshift" 
containerID="cri-o://53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7" gracePeriod=15 Jan 20 03:53:04 crc kubenswrapper[4898]: I0120 03:53:04.632268 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r4r2t"] Jan 20 03:53:04 crc kubenswrapper[4898]: I0120 03:53:04.632685 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r4r2t" podUID="e5d08231-f987-45cc-ac85-683f31f6e616" containerName="registry-server" containerID="cri-o://4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3" gracePeriod=2 Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.088751 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.094377 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104153 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-cliconfig\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104210 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-serving-cert\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104240 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-router-certs\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104265 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-login\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104286 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-provider-selection\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104309 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-error\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104324 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-ocp-branding-template\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104345 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-service-ca\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104368 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-idp-0-file-data\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104397 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-audit-policies\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104448 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-session\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104516 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fttgw\" (UniqueName: \"kubernetes.io/projected/620006cf-5c3f-457c-a416-30384cf951ec-kube-api-access-fttgw\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104542 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/620006cf-5c3f-457c-a416-30384cf951ec-audit-dir\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.104560 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-trusted-ca-bundle\") pod \"620006cf-5c3f-457c-a416-30384cf951ec\" (UID: \"620006cf-5c3f-457c-a416-30384cf951ec\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.105580 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.105826 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.106852 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.107145 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.109622 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620006cf-5c3f-457c-a416-30384cf951ec-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.114063 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.114400 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.114612 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.114771 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.115330 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.121585 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.123691 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.129874 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620006cf-5c3f-457c-a416-30384cf951ec-kube-api-access-fttgw" (OuterVolumeSpecName: "kube-api-access-fttgw") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "kube-api-access-fttgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.140475 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "620006cf-5c3f-457c-a416-30384cf951ec" (UID: "620006cf-5c3f-457c-a416-30384cf951ec"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206007 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-utilities\") pod \"e5d08231-f987-45cc-ac85-683f31f6e616\" (UID: \"e5d08231-f987-45cc-ac85-683f31f6e616\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206244 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khjrn\" (UniqueName: \"kubernetes.io/projected/e5d08231-f987-45cc-ac85-683f31f6e616-kube-api-access-khjrn\") pod \"e5d08231-f987-45cc-ac85-683f31f6e616\" (UID: \"e5d08231-f987-45cc-ac85-683f31f6e616\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206292 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-catalog-content\") pod \"e5d08231-f987-45cc-ac85-683f31f6e616\" (UID: \"e5d08231-f987-45cc-ac85-683f31f6e616\") " Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206696 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206715 4898 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206728 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206738 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fttgw\" (UniqueName: \"kubernetes.io/projected/620006cf-5c3f-457c-a416-30384cf951ec-kube-api-access-fttgw\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206748 4898 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/620006cf-5c3f-457c-a416-30384cf951ec-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206757 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206766 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206776 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206786 4898 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206795 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206807 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206819 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206828 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.206838 4898 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/620006cf-5c3f-457c-a416-30384cf951ec-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.207194 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-utilities" (OuterVolumeSpecName: "utilities") pod "e5d08231-f987-45cc-ac85-683f31f6e616" (UID: "e5d08231-f987-45cc-ac85-683f31f6e616"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.209243 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d08231-f987-45cc-ac85-683f31f6e616-kube-api-access-khjrn" (OuterVolumeSpecName: "kube-api-access-khjrn") pod "e5d08231-f987-45cc-ac85-683f31f6e616" (UID: "e5d08231-f987-45cc-ac85-683f31f6e616"). InnerVolumeSpecName "kube-api-access-khjrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.308386 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khjrn\" (UniqueName: \"kubernetes.io/projected/e5d08231-f987-45cc-ac85-683f31f6e616-kube-api-access-khjrn\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.308446 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.326093 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5d08231-f987-45cc-ac85-683f31f6e616" (UID: "e5d08231-f987-45cc-ac85-683f31f6e616"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.356686 4898 generic.go:334] "Generic (PLEG): container finished" podID="e5d08231-f987-45cc-ac85-683f31f6e616" containerID="4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3" exitCode=0 Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.356795 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4r2t" event={"ID":"e5d08231-f987-45cc-ac85-683f31f6e616","Type":"ContainerDied","Data":"4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3"} Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.356929 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4r2t" event={"ID":"e5d08231-f987-45cc-ac85-683f31f6e616","Type":"ContainerDied","Data":"ca6bd4de4e6200251b74a40431e7332e5ffd6886dd0e06effadbba03f015648a"} Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.356965 4898 scope.go:117] "RemoveContainer" containerID="4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.357651 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4r2t" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.358628 4898 generic.go:334] "Generic (PLEG): container finished" podID="620006cf-5c3f-457c-a416-30384cf951ec" containerID="53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7" exitCode=0 Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.358667 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" event={"ID":"620006cf-5c3f-457c-a416-30384cf951ec","Type":"ContainerDied","Data":"53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7"} Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.358705 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" event={"ID":"620006cf-5c3f-457c-a416-30384cf951ec","Type":"ContainerDied","Data":"409e57cd9dab888e981e9956d53e61494e87187fb3ef025544cef63111338965"} Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.358779 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zkb77" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.383425 4898 scope.go:117] "RemoveContainer" containerID="431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.396315 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zkb77"] Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.405678 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zkb77"] Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.410867 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d08231-f987-45cc-ac85-683f31f6e616-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.417736 4898 scope.go:117] "RemoveContainer" containerID="b35dbf2679cfe1ef1f0b853a16fb9949bd62aeaae8c989809dda0ea56e6a25e0" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.421035 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r4r2t"] Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.425414 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r4r2t"] Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.438821 4898 scope.go:117] "RemoveContainer" containerID="4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3" Jan 20 03:53:07 crc kubenswrapper[4898]: E0120 03:53:07.439337 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3\": container with ID starting with 4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3 not found: ID does not exist" containerID="4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.439378 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3"} err="failed to get container status \"4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3\": rpc error: code = NotFound desc = could not find container \"4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3\": container with ID starting with 4f4bcc97ed4277e18f3e7342aaa84347c53ba45d0d7caa030de24bf0b20b79b3 not found: ID does not exist" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.439416 4898 scope.go:117] "RemoveContainer" containerID="431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1" Jan 20 03:53:07 crc kubenswrapper[4898]: E0120 03:53:07.440056 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1\": container with ID starting with 431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1 not found: ID does not exist" containerID="431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.440114 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1"} err="failed to get 
container status \"431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1\": rpc error: code = NotFound desc = could not find container \"431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1\": container with ID starting with 431847b9ee0d00b27baadd6a7c815510cfec1160291d1081387e6eb9c86100c1 not found: ID does not exist" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.440161 4898 scope.go:117] "RemoveContainer" containerID="b35dbf2679cfe1ef1f0b853a16fb9949bd62aeaae8c989809dda0ea56e6a25e0" Jan 20 03:53:07 crc kubenswrapper[4898]: E0120 03:53:07.440900 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35dbf2679cfe1ef1f0b853a16fb9949bd62aeaae8c989809dda0ea56e6a25e0\": container with ID starting with b35dbf2679cfe1ef1f0b853a16fb9949bd62aeaae8c989809dda0ea56e6a25e0 not found: ID does not exist" containerID="b35dbf2679cfe1ef1f0b853a16fb9949bd62aeaae8c989809dda0ea56e6a25e0" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.441074 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35dbf2679cfe1ef1f0b853a16fb9949bd62aeaae8c989809dda0ea56e6a25e0"} err="failed to get container status \"b35dbf2679cfe1ef1f0b853a16fb9949bd62aeaae8c989809dda0ea56e6a25e0\": rpc error: code = NotFound desc = could not find container \"b35dbf2679cfe1ef1f0b853a16fb9949bd62aeaae8c989809dda0ea56e6a25e0\": container with ID starting with b35dbf2679cfe1ef1f0b853a16fb9949bd62aeaae8c989809dda0ea56e6a25e0 not found: ID does not exist" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.441229 4898 scope.go:117] "RemoveContainer" containerID="53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.475562 4898 scope.go:117] "RemoveContainer" containerID="53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7" Jan 20 03:53:07 crc kubenswrapper[4898]: E0120 03:53:07.476255 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7\": container with ID starting with 53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7 not found: ID does not exist" containerID="53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.476301 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7"} err="failed to get container status \"53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7\": rpc error: code = NotFound desc = could not find container \"53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7\": container with ID starting with 53ded5b152cb58058765366c78b5ac2848d996b8f3323a34cfa60f9d3ed46ee7 not found: ID does not exist" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.728464 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620006cf-5c3f-457c-a416-30384cf951ec" path="/var/lib/kubelet/pods/620006cf-5c3f-457c-a416-30384cf951ec/volumes" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 03:53:07.728967 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d08231-f987-45cc-ac85-683f31f6e616" path="/var/lib/kubelet/pods/e5d08231-f987-45cc-ac85-683f31f6e616/volumes" Jan 20 03:53:07 crc kubenswrapper[4898]: I0120 
03:53:07.929309 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:53:09 crc kubenswrapper[4898]: I0120 03:53:09.632010 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ss85s"] Jan 20 03:53:09 crc kubenswrapper[4898]: I0120 03:53:09.633054 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ss85s" podUID="63bc00c4-5532-4daa-9e22-b7bc5424035d" containerName="registry-server" containerID="cri-o://6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5" gracePeriod=2 Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.186102 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.261991 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-catalog-content\") pod \"63bc00c4-5532-4daa-9e22-b7bc5424035d\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.262053 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv77s\" (UniqueName: \"kubernetes.io/projected/63bc00c4-5532-4daa-9e22-b7bc5424035d-kube-api-access-vv77s\") pod \"63bc00c4-5532-4daa-9e22-b7bc5424035d\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.262147 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-utilities\") pod \"63bc00c4-5532-4daa-9e22-b7bc5424035d\" (UID: \"63bc00c4-5532-4daa-9e22-b7bc5424035d\") " Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.264034 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-utilities" (OuterVolumeSpecName: "utilities") pod "63bc00c4-5532-4daa-9e22-b7bc5424035d" (UID: "63bc00c4-5532-4daa-9e22-b7bc5424035d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.272055 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63bc00c4-5532-4daa-9e22-b7bc5424035d-kube-api-access-vv77s" (OuterVolumeSpecName: "kube-api-access-vv77s") pod "63bc00c4-5532-4daa-9e22-b7bc5424035d" (UID: "63bc00c4-5532-4daa-9e22-b7bc5424035d"). InnerVolumeSpecName "kube-api-access-vv77s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.336271 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63bc00c4-5532-4daa-9e22-b7bc5424035d" (UID: "63bc00c4-5532-4daa-9e22-b7bc5424035d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.363622 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.363654 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv77s\" (UniqueName: \"kubernetes.io/projected/63bc00c4-5532-4daa-9e22-b7bc5424035d-kube-api-access-vv77s\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.363669 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bc00c4-5532-4daa-9e22-b7bc5424035d-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.384357 4898 generic.go:334] "Generic (PLEG): container finished" podID="63bc00c4-5532-4daa-9e22-b7bc5424035d" containerID="6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5" exitCode=0 Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.384464 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss85s" event={"ID":"63bc00c4-5532-4daa-9e22-b7bc5424035d","Type":"ContainerDied","Data":"6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5"} Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.384483 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ss85s" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.384516 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss85s" event={"ID":"63bc00c4-5532-4daa-9e22-b7bc5424035d","Type":"ContainerDied","Data":"37a0dc87d0f8f3577936f99bd50e1552132d29055201f0cd68f141fd54e25d9a"} Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.384548 4898 scope.go:117] "RemoveContainer" containerID="6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.414834 4898 scope.go:117] "RemoveContainer" containerID="978aeb77bdfde9ee352b5847f4b26ff78ea060a33bd239ba4ff1ac9e29886d1a" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.425722 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ss85s"] Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.430134 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ss85s"] Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.457633 4898 scope.go:117] "RemoveContainer" containerID="4a55e45012aa85bc3e66c8bcc3844fdc4a93dfecf419588d3f475174e84fd8a5" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.484408 4898 scope.go:117] "RemoveContainer" containerID="6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5" Jan 20 03:53:10 crc kubenswrapper[4898]: E0120 03:53:10.485077 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5\": container with ID starting with 6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5 not found: ID does not exist" containerID="6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.485171 
4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5"} err="failed to get container status \"6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5\": rpc error: code = NotFound desc = could not find container \"6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5\": container with ID starting with 6e82d07615c181ff2bb6ef4a37d713031060cf69a6dda6829a0836f224d73bb5 not found: ID does not exist" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.485234 4898 scope.go:117] "RemoveContainer" containerID="978aeb77bdfde9ee352b5847f4b26ff78ea060a33bd239ba4ff1ac9e29886d1a" Jan 20 03:53:10 crc kubenswrapper[4898]: E0120 03:53:10.485883 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"978aeb77bdfde9ee352b5847f4b26ff78ea060a33bd239ba4ff1ac9e29886d1a\": container with ID starting with 978aeb77bdfde9ee352b5847f4b26ff78ea060a33bd239ba4ff1ac9e29886d1a not found: ID does not exist" containerID="978aeb77bdfde9ee352b5847f4b26ff78ea060a33bd239ba4ff1ac9e29886d1a" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.485933 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"978aeb77bdfde9ee352b5847f4b26ff78ea060a33bd239ba4ff1ac9e29886d1a"} err="failed to get container status \"978aeb77bdfde9ee352b5847f4b26ff78ea060a33bd239ba4ff1ac9e29886d1a\": rpc error: code = NotFound desc = could not find container \"978aeb77bdfde9ee352b5847f4b26ff78ea060a33bd239ba4ff1ac9e29886d1a\": container with ID starting with 978aeb77bdfde9ee352b5847f4b26ff78ea060a33bd239ba4ff1ac9e29886d1a not found: ID does not exist" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.485963 4898 scope.go:117] "RemoveContainer" containerID="4a55e45012aa85bc3e66c8bcc3844fdc4a93dfecf419588d3f475174e84fd8a5" Jan 20 03:53:10 crc kubenswrapper[4898]: E0120 03:53:10.486579 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a55e45012aa85bc3e66c8bcc3844fdc4a93dfecf419588d3f475174e84fd8a5\": container with ID starting with 4a55e45012aa85bc3e66c8bcc3844fdc4a93dfecf419588d3f475174e84fd8a5 not found: ID does not exist" containerID="4a55e45012aa85bc3e66c8bcc3844fdc4a93dfecf419588d3f475174e84fd8a5" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.486644 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a55e45012aa85bc3e66c8bcc3844fdc4a93dfecf419588d3f475174e84fd8a5"} err="failed to get container status \"4a55e45012aa85bc3e66c8bcc3844fdc4a93dfecf419588d3f475174e84fd8a5\": rpc error: code = NotFound desc = could not find container \"4a55e45012aa85bc3e66c8bcc3844fdc4a93dfecf419588d3f475174e84fd8a5\": container with ID starting with 4a55e45012aa85bc3e66c8bcc3844fdc4a93dfecf419588d3f475174e84fd8a5 not found: ID does not exist" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.609630 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:53:10 crc kubenswrapper[4898]: I0120 03:53:10.664232 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039036 4898 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-69b55d54f6-dmxkf"] Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039349 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ab491e-141c-4810-8d67-e31de85498c9" containerName="extract-utilities" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039370 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ab491e-141c-4810-8d67-e31de85498c9" containerName="extract-utilities" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039390 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039405 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039422 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d08231-f987-45cc-ac85-683f31f6e616" containerName="extract-content" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039461 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d08231-f987-45cc-ac85-683f31f6e616" containerName="extract-content" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039479 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ab491e-141c-4810-8d67-e31de85498c9" containerName="extract-content" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039489 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ab491e-141c-4810-8d67-e31de85498c9" containerName="extract-content" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039501 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d08231-f987-45cc-ac85-683f31f6e616" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039511 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d08231-f987-45cc-ac85-683f31f6e616" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039523 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d08231-f987-45cc-ac85-683f31f6e616" containerName="extract-utilities" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039534 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d08231-f987-45cc-ac85-683f31f6e616" containerName="extract-utilities" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039554 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620006cf-5c3f-457c-a416-30384cf951ec" containerName="oauth-openshift" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039564 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="620006cf-5c3f-457c-a416-30384cf951ec" containerName="oauth-openshift" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039575 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" containerName="extract-utilities" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039584 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" containerName="extract-utilities" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039596 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bc00c4-5532-4daa-9e22-b7bc5424035d" containerName="extract-utilities" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039606 4898 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="63bc00c4-5532-4daa-9e22-b7bc5424035d" containerName="extract-utilities" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039619 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" containerName="extract-content" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039629 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" containerName="extract-content" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039644 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ab491e-141c-4810-8d67-e31de85498c9" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039654 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ab491e-141c-4810-8d67-e31de85498c9" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039669 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bc00c4-5532-4daa-9e22-b7bc5424035d" containerName="extract-content" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039681 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bc00c4-5532-4daa-9e22-b7bc5424035d" containerName="extract-content" Jan 20 03:53:11 crc kubenswrapper[4898]: E0120 03:53:11.039698 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bc00c4-5532-4daa-9e22-b7bc5424035d" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039710 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bc00c4-5532-4daa-9e22-b7bc5424035d" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039884 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ecbb04-c29f-45a2-88aa-1d58b4d44820" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039905 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="63bc00c4-5532-4daa-9e22-b7bc5424035d" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039922 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ab491e-141c-4810-8d67-e31de85498c9" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039946 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d08231-f987-45cc-ac85-683f31f6e616" containerName="registry-server" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.039960 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="620006cf-5c3f-457c-a416-30384cf951ec" containerName="oauth-openshift" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.041086 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.051013 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.051706 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.052561 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.053226 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.055258 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.062465 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.063186 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.064049 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.064322 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.065507 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.065901 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.066199 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.119934 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.120095 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.120157 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-session\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.120220 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.120354 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-template-error\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.120455 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xz2p\" (UniqueName: \"kubernetes.io/projected/a9b7316f-cbc1-460c-b95c-21b515a93a47-kube-api-access-2xz2p\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.120715 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-audit-policies\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.120773 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9b7316f-cbc1-460c-b95c-21b515a93a47-audit-dir\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.123055 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-template-login\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.123209 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.123351 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.123566 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.128142 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.128360 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-service-ca\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.129997 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.132298 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.135343 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69b55d54f6-dmxkf"] Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.147041 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229617 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229717 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229756 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-service-ca\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229783 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229806 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229832 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-session\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229856 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229893 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-template-error\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229915 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xz2p\" (UniqueName: \"kubernetes.io/projected/a9b7316f-cbc1-460c-b95c-21b515a93a47-kube-api-access-2xz2p\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229937 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-audit-policies\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229961 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/a9b7316f-cbc1-460c-b95c-21b515a93a47-audit-dir\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.229982 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-template-login\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.230007 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.230034 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.231932 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9b7316f-cbc1-460c-b95c-21b515a93a47-audit-dir\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.232329 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.233378 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.233910 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-audit-policies\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.234395 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.237220 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-session\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.237299 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.238395 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-template-login\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.239068 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.239400 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.241843 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-template-error\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.241881 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.244160 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9b7316f-cbc1-460c-b95c-21b515a93a47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b55d54f6-dmxkf\" 
(UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.252265 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xz2p\" (UniqueName: \"kubernetes.io/projected/a9b7316f-cbc1-460c-b95c-21b515a93a47-kube-api-access-2xz2p\") pod \"oauth-openshift-69b55d54f6-dmxkf\" (UID: \"a9b7316f-cbc1-460c-b95c-21b515a93a47\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.418934 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.687875 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69b55d54f6-dmxkf"] Jan 20 03:53:11 crc kubenswrapper[4898]: W0120 03:53:11.693278 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9b7316f_cbc1_460c_b95c_21b515a93a47.slice/crio-0e87af3c95150e0316780c0154284c22ff476de399ea74534a7fdfb97f443a73 WatchSource:0}: Error finding container 0e87af3c95150e0316780c0154284c22ff476de399ea74534a7fdfb97f443a73: Status 404 returned error can't find the container with id 0e87af3c95150e0316780c0154284c22ff476de399ea74534a7fdfb97f443a73 Jan 20 03:53:11 crc kubenswrapper[4898]: I0120 03:53:11.731284 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63bc00c4-5532-4daa-9e22-b7bc5424035d" path="/var/lib/kubelet/pods/63bc00c4-5532-4daa-9e22-b7bc5424035d/volumes" Jan 20 03:53:12 crc kubenswrapper[4898]: I0120 03:53:12.398045 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" event={"ID":"a9b7316f-cbc1-460c-b95c-21b515a93a47","Type":"ContainerStarted","Data":"aa38354bb35b41a28ad02105e982afdbaa29cb95277000674e70bef1a51aac50"} Jan 20 03:53:12 crc kubenswrapper[4898]: I0120 03:53:12.398118 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" event={"ID":"a9b7316f-cbc1-460c-b95c-21b515a93a47","Type":"ContainerStarted","Data":"0e87af3c95150e0316780c0154284c22ff476de399ea74534a7fdfb97f443a73"} Jan 20 03:53:12 crc kubenswrapper[4898]: I0120 03:53:12.398383 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:12 crc kubenswrapper[4898]: I0120 03:53:12.428808 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" podStartSLOduration=33.428788951 podStartE2EDuration="33.428788951s" podCreationTimestamp="2026-01-20 03:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:53:12.425191113 +0000 UTC m=+239.024978972" watchObservedRunningTime="2026-01-20 03:53:12.428788951 +0000 UTC m=+239.028576820" Jan 20 03:53:12 crc kubenswrapper[4898]: I0120 03:53:12.521584 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-69b55d54f6-dmxkf" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.223520 4898 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 
03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.226177 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.228938 4898 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.229477 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20" gracePeriod=15 Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.229534 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402" gracePeriod=15 Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.229619 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b" gracePeriod=15 Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.229669 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be" gracePeriod=15 Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.229755 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222" gracePeriod=15 Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231344 4898 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.231594 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231608 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.231619 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231637 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.231646 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231651 4898 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.231660 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231665 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.231677 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231682 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.231689 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231695 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.231708 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231714 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231832 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231843 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231852 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231860 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.231867 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.232059 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.292030 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.365261 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.365325 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.365374 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.365571 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.365776 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.365864 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.366000 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.366051 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.467384 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.467525 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.467559 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.467587 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.467712 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.467739 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.467787 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.467863 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.467888 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.467932 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.467946 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 
03:53:26.467981 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.468042 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.468069 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.468205 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.468238 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.488628 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.490571 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.491986 4898 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402" exitCode=0 Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.492030 4898 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222" exitCode=0 Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.492042 4898 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b" exitCode=0 Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.492059 4898 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be" exitCode=2 Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.492106 4898 scope.go:117] "RemoveContainer" 
containerID="21c49e7e4df8536cb6a6a848467dbc3b4b18824f2ff86292b394b2d6c07686ad" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.494799 4898 generic.go:334] "Generic (PLEG): container finished" podID="9571428a-14e5-47b1-b963-13f4a5bdfaba" containerID="0fad52a88eb7f7a2ad017c176ef95b61b8c7442f2a0397bc2919147587528091" exitCode=0 Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.494864 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9571428a-14e5-47b1-b963-13f4a5bdfaba","Type":"ContainerDied","Data":"0fad52a88eb7f7a2ad017c176ef95b61b8c7442f2a0397bc2919147587528091"} Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.496204 4898 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.496843 4898 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.497371 4898 status_manager.go:851] "Failed to get status for pod" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.529842 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod9571428a_14e5_47b1_b963_13f4a5bdfaba.slice/crio-conmon-0fad52a88eb7f7a2ad017c176ef95b61b8c7442f2a0397bc2919147587528091.scope\": RecentStats: unable to find data in memory cache]" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.589298 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 03:53:26 crc kubenswrapper[4898]: W0120 03:53:26.608686 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-6fe11234e85c3d1733634e67b6d4f34d526978c8b84b381427870ce6d2794398 WatchSource:0}: Error finding container 6fe11234e85c3d1733634e67b6d4f34d526978c8b84b381427870ce6d2794398: Status 404 returned error can't find the container with id 6fe11234e85c3d1733634e67b6d4f34d526978c8b84b381427870ce6d2794398 Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.611906 4898 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c5415e1ed18fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 03:53:26.61122073 +0000 UTC m=+253.211008599,LastTimestamp:2026-01-20 03:53:26.61122073 +0000 UTC m=+253.211008599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.972025 4898 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.973248 4898 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.973694 4898 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.974336 4898 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 03:53:26.974811 4898 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:26 crc kubenswrapper[4898]: I0120 03:53:26.974873 4898 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 20 03:53:26 crc kubenswrapper[4898]: E0120 
03:53:26.975272 4898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms" Jan 20 03:53:27 crc kubenswrapper[4898]: E0120 03:53:27.176424 4898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms" Jan 20 03:53:27 crc kubenswrapper[4898]: I0120 03:53:27.514968 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90"} Jan 20 03:53:27 crc kubenswrapper[4898]: I0120 03:53:27.515103 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6fe11234e85c3d1733634e67b6d4f34d526978c8b84b381427870ce6d2794398"} Jan 20 03:53:27 crc kubenswrapper[4898]: I0120 03:53:27.516117 4898 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:27 crc kubenswrapper[4898]: I0120 03:53:27.516949 4898 status_manager.go:851] "Failed to get status for pod" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:27 crc kubenswrapper[4898]: I0120 03:53:27.517779 4898 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:27 crc kubenswrapper[4898]: I0120 03:53:27.526057 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 03:53:27 crc kubenswrapper[4898]: E0120 03:53:27.577526 4898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="800ms" Jan 20 03:53:27 crc kubenswrapper[4898]: I0120 03:53:27.877151 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:53:27 crc kubenswrapper[4898]: I0120 03:53:27.878150 4898 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:27 crc kubenswrapper[4898]: I0120 03:53:27.878883 4898 status_manager.go:851] "Failed to get status for pod" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.008377 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9571428a-14e5-47b1-b963-13f4a5bdfaba-kube-api-access\") pod \"9571428a-14e5-47b1-b963-13f4a5bdfaba\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.008518 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-var-lock\") pod \"9571428a-14e5-47b1-b963-13f4a5bdfaba\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.008608 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-var-lock" (OuterVolumeSpecName: "var-lock") pod "9571428a-14e5-47b1-b963-13f4a5bdfaba" (UID: "9571428a-14e5-47b1-b963-13f4a5bdfaba"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.008686 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-kubelet-dir\") pod \"9571428a-14e5-47b1-b963-13f4a5bdfaba\" (UID: \"9571428a-14e5-47b1-b963-13f4a5bdfaba\") " Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.008755 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9571428a-14e5-47b1-b963-13f4a5bdfaba" (UID: "9571428a-14e5-47b1-b963-13f4a5bdfaba"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.009045 4898 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.009060 4898 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9571428a-14e5-47b1-b963-13f4a5bdfaba-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.019713 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9571428a-14e5-47b1-b963-13f4a5bdfaba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9571428a-14e5-47b1-b963-13f4a5bdfaba" (UID: "9571428a-14e5-47b1-b963-13f4a5bdfaba"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.110814 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9571428a-14e5-47b1-b963-13f4a5bdfaba-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:28 crc kubenswrapper[4898]: E0120 03:53:28.379567 4898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.535974 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9571428a-14e5-47b1-b963-13f4a5bdfaba","Type":"ContainerDied","Data":"d01a14a65cb7dabbe7199012600aef288bec68564f07fc8defa0a72bd4b46c56"} Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.536338 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d01a14a65cb7dabbe7199012600aef288bec68564f07fc8defa0a72bd4b46c56" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.535997 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.617787 4898 status_manager.go:851] "Failed to get status for pod" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.618080 4898 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.622817 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.623853 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.624589 4898 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.624871 4898 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.625680 4898 status_manager.go:851] "Failed to get status for pod" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.720928 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.721038 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.721060 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.721111 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.721201 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.721219 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.721588 4898 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.721610 4898 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:28 crc kubenswrapper[4898]: I0120 03:53:28.721645 4898 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.567625 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.569674 4898 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20" exitCode=0 Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.569770 4898 scope.go:117] "RemoveContainer" containerID="9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.569971 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.591253 4898 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.592114 4898 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.592607 4898 status_manager.go:851] "Failed to get status for pod" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.599374 4898 scope.go:117] "RemoveContainer" containerID="1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.625472 4898 scope.go:117] "RemoveContainer" containerID="047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.646172 4898 scope.go:117] "RemoveContainer" containerID="ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.667209 4898 scope.go:117] "RemoveContainer" 
containerID="82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.688889 4898 scope.go:117] "RemoveContainer" containerID="3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.718183 4898 scope.go:117] "RemoveContainer" containerID="9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402" Jan 20 03:53:29 crc kubenswrapper[4898]: E0120 03:53:29.719326 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\": container with ID starting with 9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402 not found: ID does not exist" containerID="9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.719363 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402"} err="failed to get container status \"9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\": rpc error: code = NotFound desc = could not find container \"9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402\": container with ID starting with 9243b75d8ee8fb9f2628f214af7938752d813ad275cb80c942eb587cc173e402 not found: ID does not exist" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.719386 4898 scope.go:117] "RemoveContainer" containerID="1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222" Jan 20 03:53:29 crc kubenswrapper[4898]: E0120 03:53:29.719902 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\": container with ID starting with 1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222 not found: ID does not exist" containerID="1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.720066 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222"} err="failed to get container status \"1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\": rpc error: code = NotFound desc = could not find container \"1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222\": container with ID starting with 1879a4ca428bfb1012e2d0bb336d15ef807a18578ad977c2abc0ddba0d99d222 not found: ID does not exist" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.720856 4898 scope.go:117] "RemoveContainer" containerID="047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b" Jan 20 03:53:29 crc kubenswrapper[4898]: E0120 03:53:29.722137 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\": container with ID starting with 047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b not found: ID does not exist" containerID="047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.722213 4898 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b"} err="failed to get container status \"047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\": rpc error: code = NotFound desc = could not find container \"047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b\": container with ID starting with 047f919a82e402c79fc6537faef9a69be6e953ba102993d6bbbd30da2d82196b not found: ID does not exist" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.722258 4898 scope.go:117] "RemoveContainer" containerID="ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be" Jan 20 03:53:29 crc kubenswrapper[4898]: E0120 03:53:29.722960 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\": container with ID starting with ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be not found: ID does not exist" containerID="ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.723038 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be"} err="failed to get container status \"ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\": rpc error: code = NotFound desc = could not find container \"ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be\": container with ID starting with ea70222db680070ef11aac184b5309ade0d6125aeef5e333b453a638ee5b39be not found: ID does not exist" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.723089 4898 scope.go:117] "RemoveContainer" containerID="82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20" Jan 20 03:53:29 crc kubenswrapper[4898]: E0120 03:53:29.723492 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\": container with ID starting with 82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20 not found: ID does not exist" containerID="82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.723535 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20"} err="failed to get container status \"82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\": rpc error: code = NotFound desc = could not find container \"82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20\": container with ID starting with 82dd4bbf4cf6561f4f0a20bd8aae2dcbffcb1bed750147f3d74a31c0749dfc20 not found: ID does not exist" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.723554 4898 scope.go:117] "RemoveContainer" containerID="3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47" Jan 20 03:53:29 crc kubenswrapper[4898]: E0120 03:53:29.725570 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\": container with ID starting with 3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47 not found: ID does not exist" 
containerID="3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.725607 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47"} err="failed to get container status \"3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\": rpc error: code = NotFound desc = could not find container \"3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47\": container with ID starting with 3495bcee5aa52227b773534fe4c5619785af128f2588cf0b9b7afea58e3dec47 not found: ID does not exist" Jan 20 03:53:29 crc kubenswrapper[4898]: I0120 03:53:29.730752 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 20 03:53:29 crc kubenswrapper[4898]: E0120 03:53:29.980412 4898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="3.2s" Jan 20 03:53:32 crc kubenswrapper[4898]: E0120 03:53:32.158112 4898 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c5415e1ed18fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 03:53:26.61122073 +0000 UTC m=+253.211008599,LastTimestamp:2026-01-20 03:53:26.61122073 +0000 UTC m=+253.211008599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 03:53:33 crc kubenswrapper[4898]: E0120 03:53:33.182269 4898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="6.4s" Jan 20 03:53:33 crc kubenswrapper[4898]: I0120 03:53:33.727719 4898 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:33 crc kubenswrapper[4898]: I0120 03:53:33.728229 4898 status_manager.go:851] "Failed to get status for pod" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: 
connection refused" Jan 20 03:53:37 crc kubenswrapper[4898]: I0120 03:53:37.725013 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:37 crc kubenswrapper[4898]: I0120 03:53:37.727658 4898 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:37 crc kubenswrapper[4898]: I0120 03:53:37.728134 4898 status_manager.go:851] "Failed to get status for pod" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:37 crc kubenswrapper[4898]: I0120 03:53:37.752376 4898 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4971570e-0291-414d-8c26-d8e99cf4e978" Jan 20 03:53:37 crc kubenswrapper[4898]: I0120 03:53:37.752509 4898 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4971570e-0291-414d-8c26-d8e99cf4e978" Jan 20 03:53:37 crc kubenswrapper[4898]: E0120 03:53:37.753494 4898 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:37 crc kubenswrapper[4898]: I0120 03:53:37.754402 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:38 crc kubenswrapper[4898]: I0120 03:53:38.635711 4898 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b3bd37ff8a149e73c1eeade389ffa0c03d2cc86514a7947f1de3f6ed61dd4001" exitCode=0 Jan 20 03:53:38 crc kubenswrapper[4898]: I0120 03:53:38.635780 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b3bd37ff8a149e73c1eeade389ffa0c03d2cc86514a7947f1de3f6ed61dd4001"} Jan 20 03:53:38 crc kubenswrapper[4898]: I0120 03:53:38.636170 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f22fbc8d8090319678d27d29e78ac2cf0458112ac1e8932b46a651be0397ef9a"} Jan 20 03:53:38 crc kubenswrapper[4898]: I0120 03:53:38.636455 4898 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4971570e-0291-414d-8c26-d8e99cf4e978" Jan 20 03:53:38 crc kubenswrapper[4898]: I0120 03:53:38.636468 4898 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4971570e-0291-414d-8c26-d8e99cf4e978" Jan 20 03:53:38 crc kubenswrapper[4898]: E0120 03:53:38.636917 4898 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:38 crc kubenswrapper[4898]: I0120 03:53:38.637214 4898 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:38 crc kubenswrapper[4898]: I0120 03:53:38.638216 4898 status_manager.go:851] "Failed to get status for pod" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 20 03:53:39 crc kubenswrapper[4898]: I0120 03:53:39.664574 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bd9e8417d7ea6e7de427dcdbe16b592d90acc5aa3b0f4f231eccc7721a3065cb"} Jan 20 03:53:39 crc kubenswrapper[4898]: I0120 03:53:39.664919 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b46d882e12c032b024e190b2e315fd7a49861b0b35d81cb8f17586d9876d8aca"} Jan 20 03:53:39 crc kubenswrapper[4898]: I0120 03:53:39.664935 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c31078e595fa4e803724196b1685a1f35ba19f24e415ff58e7189e61c9fbddf6"} Jan 20 03:53:40 crc kubenswrapper[4898]: I0120 03:53:40.671375 4898 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 03:53:40 crc kubenswrapper[4898]: I0120 03:53:40.671624 4898 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc" exitCode=1 Jan 20 03:53:40 crc kubenswrapper[4898]: I0120 03:53:40.671673 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc"} Jan 20 03:53:40 crc kubenswrapper[4898]: I0120 03:53:40.672079 4898 scope.go:117] "RemoveContainer" containerID="e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc" Jan 20 03:53:40 crc kubenswrapper[4898]: I0120 03:53:40.677864 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0b2016b7f5220d1788f2395923a108d3c433af1e9ef9a1e0dac0df182965b91b"} Jan 20 03:53:40 crc kubenswrapper[4898]: I0120 03:53:40.677900 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5eeb70750e2201908f4ccbd5566ab87a63dbda477a604b931ef880cc2844b4ee"} Jan 20 03:53:40 crc kubenswrapper[4898]: I0120 03:53:40.678021 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:40 crc kubenswrapper[4898]: I0120 03:53:40.678100 4898 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4971570e-0291-414d-8c26-d8e99cf4e978" Jan 20 03:53:40 crc kubenswrapper[4898]: I0120 03:53:40.678121 4898 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4971570e-0291-414d-8c26-d8e99cf4e978" Jan 20 03:53:41 crc kubenswrapper[4898]: I0120 03:53:41.692178 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 03:53:41 crc kubenswrapper[4898]: I0120 03:53:41.692766 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"df7c26abfd9812594732284c47f46e7e1d11b6ed4f4f7580a5c7e295fa59068e"} Jan 20 03:53:42 crc kubenswrapper[4898]: I0120 03:53:42.618060 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:53:42 crc kubenswrapper[4898]: I0120 03:53:42.755463 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:42 crc kubenswrapper[4898]: I0120 03:53:42.755723 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:42 crc kubenswrapper[4898]: I0120 03:53:42.764634 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:42 crc kubenswrapper[4898]: I0120 
03:53:42.767586 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:53:42 crc kubenswrapper[4898]: I0120 03:53:42.770766 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 20 03:53:42 crc kubenswrapper[4898]: I0120 03:53:42.780873 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:53:42 crc kubenswrapper[4898]: I0120 03:53:42.861858 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 03:53:43 crc kubenswrapper[4898]: W0120 03:53:43.381305 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-c20d02b22c93e4e43045e1a10353e4d19779360e8be99838462e8099f9a9b93d WatchSource:0}: Error finding container c20d02b22c93e4e43045e1a10353e4d19779360e8be99838462e8099f9a9b93d: Status 404 returned error can't find the container with id c20d02b22c93e4e43045e1a10353e4d19779360e8be99838462e8099f9a9b93d Jan 20 03:53:43 crc kubenswrapper[4898]: I0120 03:53:43.512505 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:53:43 crc kubenswrapper[4898]: I0120 03:53:43.512847 4898 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 20 03:53:43 crc kubenswrapper[4898]: I0120 03:53:43.512911 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 20 03:53:43 crc kubenswrapper[4898]: I0120 03:53:43.710140 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"80559ed25670a06307c220933879adb845025a89b03c755df33b87f25acc01ca"} Jan 20 03:53:43 crc kubenswrapper[4898]: I0120 03:53:43.710489 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c20d02b22c93e4e43045e1a10353e4d19779360e8be99838462e8099f9a9b93d"} Jan 20 03:53:45 crc kubenswrapper[4898]: I0120 03:53:45.688515 4898 kubelet.go:1914] "Deleted mirror pod because it is outdated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:45 crc kubenswrapper[4898]: I0120 03:53:45.731420 4898 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4971570e-0291-414d-8c26-d8e99cf4e978" Jan 20 03:53:45 crc kubenswrapper[4898]: I0120 03:53:45.731464 4898 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4971570e-0291-414d-8c26-d8e99cf4e978" Jan 20 03:53:45 crc kubenswrapper[4898]: I0120 03:53:45.737172 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:53:45 crc kubenswrapper[4898]: I0120 03:53:45.754558 4898 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="dbc4304e-02df-4f4b-9df7-4b5ec56a9066" Jan 20 03:53:46 crc kubenswrapper[4898]: I0120 03:53:46.736929 4898 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4971570e-0291-414d-8c26-d8e99cf4e978" Jan 20 03:53:46 crc kubenswrapper[4898]: I0120 03:53:46.736977 4898 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4971570e-0291-414d-8c26-d8e99cf4e978" Jan 20 03:53:46 crc kubenswrapper[4898]: I0120 03:53:46.740316 4898 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="dbc4304e-02df-4f4b-9df7-4b5ec56a9066" Jan 20 03:53:53 crc kubenswrapper[4898]: I0120 03:53:53.512625 4898 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 20 03:53:53 crc kubenswrapper[4898]: I0120 03:53:53.513007 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 20 03:53:53 crc kubenswrapper[4898]: I0120 03:53:53.587371 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 20 03:53:56 crc kubenswrapper[4898]: I0120 03:53:56.461326 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 20 03:53:56 crc kubenswrapper[4898]: I0120 03:53:56.671082 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 20 03:53:56 crc kubenswrapper[4898]: I0120 03:53:56.713379 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 20 03:53:57 crc kubenswrapper[4898]: I0120 03:53:57.306190 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 20 03:53:57 crc kubenswrapper[4898]: I0120 03:53:57.536994 4898 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 20 03:53:57 crc kubenswrapper[4898]: I0120 03:53:57.615998 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 20 03:53:57 crc kubenswrapper[4898]: I0120 03:53:57.846649 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 20 03:53:58 crc kubenswrapper[4898]: I0120 03:53:58.437560 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 20 03:53:58 crc kubenswrapper[4898]: I0120 03:53:58.645545 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 20 03:53:58 crc kubenswrapper[4898]: I0120 03:53:58.755262 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 20 03:53:58 crc kubenswrapper[4898]: I0120 03:53:58.919646 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.160847 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.162573 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.255118 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.263229 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.271954 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.275850 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.441884 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.451876 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.468129 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.513024 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.640683 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.685936 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.823845 4898 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 03:53:59 crc kubenswrapper[4898]: I0120 03:53:59.960699 4898 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.048291 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.060297 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.092563 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.117688 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.150138 4898 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.157091 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=34.157066138 podStartE2EDuration="34.157066138s" podCreationTimestamp="2026-01-20 03:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:53:45.718125142 +0000 UTC m=+272.317913021" watchObservedRunningTime="2026-01-20 03:54:00.157066138 +0000 UTC m=+286.756854037" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.160206 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.160297 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.168684 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.193414 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.193373848 podStartE2EDuration="15.193373848s" podCreationTimestamp="2026-01-20 03:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:54:00.186422258 +0000 UTC m=+286.786210117" watchObservedRunningTime="2026-01-20 03:54:00.193373848 +0000 UTC m=+286.793161747" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.225140 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.341890 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.426782 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.600397 4898 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.626024 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.648312 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.653633 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.684068 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.776336 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.819203 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.911213 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 20 03:54:00 crc kubenswrapper[4898]: I0120 03:54:00.954785 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.103708 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.147307 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.181578 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.281840 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.365065 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.429923 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.526365 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.596629 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.777699 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.780665 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.796892 4898 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.823530 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.899639 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 03:54:01 crc kubenswrapper[4898]: I0120 03:54:01.940923 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.087849 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.101403 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.106152 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.117918 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.128249 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.146921 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.188633 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.227417 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.228866 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.265850 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.338812 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.355375 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.376287 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.421304 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.424543 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.646801 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.659019 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.660322 4898 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.662156 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.684040 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.743395 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.780657 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.793163 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.807339 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.893884 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.968829 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.989151 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 20 03:54:02 crc kubenswrapper[4898]: I0120 03:54:02.994173 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.022808 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.031796 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.070426 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.173832 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.208783 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.266127 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.351320 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.362211 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.451803 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.512281 4898 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.512341 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.512399 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.513107 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"df7c26abfd9812594732284c47f46e7e1d11b6ed4f4f7580a5c7e295fa59068e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.513236 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://df7c26abfd9812594732284c47f46e7e1d11b6ed4f4f7580a5c7e295fa59068e" gracePeriod=30
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.555705 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.626940 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.639760 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.781827 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.806914 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.837225 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.838661 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
object-"openshift-ovn-kubernetes"/"env-overrides" Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.851208 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.853186 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.853263 4898 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="df7c26abfd9812594732284c47f46e7e1d11b6ed4f4f7580a5c7e295fa59068e" exitCode=2 Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.853329 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"df7c26abfd9812594732284c47f46e7e1d11b6ed4f4f7580a5c7e295fa59068e"} Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.853583 4898 scope.go:117] "RemoveContainer" containerID="e71575629c6a72382e94edd6cc8c8e9cc7b85dc28b43b1c4d85c47ecff3377fc" Jan 20 03:54:03 crc kubenswrapper[4898]: I0120 03:54:03.971059 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.115335 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.132475 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.227320 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.238616 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.290671 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.298210 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.349230 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.364752 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.418398 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.431270 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.435234 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 20 03:54:04 crc kubenswrapper[4898]: 
Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.449220 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.519067 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.787302 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.862359 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.863776 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fd8b9c7d077577313b1c6c857eafcee2a15c80181fef4dd68678c8023b38aaa5"}
Jan 20 03:54:04 crc kubenswrapper[4898]: I0120 03:54:04.869329 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.083835 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.098177 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.238824 4898 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.240698 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.256161 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.258220 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.422234 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.489820 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.530111 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.580131 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.836885 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.865353 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.868737 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.940381 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 20 03:54:05 crc kubenswrapper[4898]: I0120 03:54:05.981180 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.017285 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.049865 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.051620 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.148319 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.287416 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.457504 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.496957 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.516303 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.720255 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.732686 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.759103 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.782417 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.828566 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.847323 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.851889 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.878022 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
object-"openshift-apiserver"/"serving-cert" Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.965483 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.971794 4898 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 20 03:54:06 crc kubenswrapper[4898]: I0120 03:54:06.997548 4898 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.108404 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.255031 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.403084 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.408665 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.558083 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.567321 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.587228 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.593766 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.621132 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.704827 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.735268 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.737377 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.741817 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.777473 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.781126 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.791957 4898 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.823682 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.830579 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.851297 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.860143 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.972463 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 20 03:54:07 crc kubenswrapper[4898]: I0120 03:54:07.996741 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.062758 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.125076 4898 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.125940 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90" gracePeriod=5 Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.213106 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.218847 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.280023 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.294464 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.310696 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.366414 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.424349 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.515281 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.611003 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.614582 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.659260 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.693837 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.744878 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.795100 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.840083 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.872027 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.917191 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.936495 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.967540 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 20 03:54:08 crc kubenswrapper[4898]: I0120 03:54:08.999934 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.103801 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.139946 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.140841 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.206111 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.265588 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.273541 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.299588 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.537696 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.548060 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.582411 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.671118 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.711985 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.764244 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 20 03:54:09 crc kubenswrapper[4898]: I0120 03:54:09.904112 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.021556 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.024879 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.043052 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.099759 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.102381 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.131309 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.313680 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.331675 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.423409 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.546755 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.861573 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 20 03:54:10 crc kubenswrapper[4898]: I0120 03:54:10.979925 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.046650 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.094906 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.096498 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.166391 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.228791 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.235501 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.315939 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.335968 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.396198 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.398123 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.455844 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.472395 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.494262 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.507728 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.803225 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 20 03:54:11 crc kubenswrapper[4898]: I0120 03:54:11.890034 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 20 03:54:12 crc kubenswrapper[4898]: I0120 03:54:12.118943 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 03:54:12 crc kubenswrapper[4898]: I0120 03:54:12.262987 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 20 03:54:12 crc kubenswrapper[4898]: I0120 03:54:12.573275 4898 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Jan 20 03:54:12 crc kubenswrapper[4898]: I0120 03:54:12.617665 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:54:12 crc kubenswrapper[4898]: I0120 03:54:12.628527 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 20 03:54:12 crc kubenswrapper[4898]: I0120 03:54:12.672353 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 20 03:54:12 crc kubenswrapper[4898]: I0120 03:54:12.727184 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 20 03:54:12 crc kubenswrapper[4898]: I0120 03:54:12.865146 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.004354 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.043296 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.254219 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.435882 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.512685 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.520515 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.549012 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.550957 4898 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.676338 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.748682 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.748770 4898 util.go:48] "No ready sandbox for pod can be found. 
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.771370 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.796856 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.823361 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.823454 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.823482 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.823513 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.823535 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.823495 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.823586 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.823635 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.823546 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.823991 4898 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.824027 4898 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.824049 4898 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.824068 4898 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.834374 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.925507 4898 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.936822 4898 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90" exitCode=137 Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.936940 4898 util.go:48] "No ready sandbox for pod can be found. 
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.937058 4898 scope.go:117] "RemoveContainer" containerID="11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90"
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.941918 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.961271 4898 scope.go:117] "RemoveContainer" containerID="11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90"
Jan 20 03:54:13 crc kubenswrapper[4898]: E0120 03:54:13.961754 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90\": container with ID starting with 11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90 not found: ID does not exist" containerID="11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90"
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.961851 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90"} err="failed to get container status \"11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90\": rpc error: code = NotFound desc = could not find container \"11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90\": container with ID starting with 11a1a91f53e270b62daf0f851f6e122be08019f0bb272c1fef25bf354b403b90 not found: ID does not exist"
Jan 20 03:54:13 crc kubenswrapper[4898]: I0120 03:54:13.994087 4898 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="9b29476e-b07e-47e2-b8ba-087cebf7e07b"
Jan 20 03:54:15 crc kubenswrapper[4898]: I0120 03:54:15.732169 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 20 03:54:15 crc kubenswrapper[4898]: I0120 03:54:15.732971 4898 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 20 03:54:15 crc kubenswrapper[4898]: I0120 03:54:15.742001 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 20 03:54:15 crc kubenswrapper[4898]: I0120 03:54:15.742045 4898 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="9b29476e-b07e-47e2-b8ba-087cebf7e07b"
Jan 20 03:54:15 crc kubenswrapper[4898]: I0120 03:54:15.745986 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 20 03:54:15 crc kubenswrapper[4898]: I0120 03:54:15.746008 4898 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="9b29476e-b07e-47e2-b8ba-087cebf7e07b"
Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.088906 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bvccs"]
Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.089749 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bvccs" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerName="registry-server" containerID="cri-o://3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4" gracePeriod=30
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-bvccs" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerName="registry-server" containerID="cri-o://3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4" gracePeriod=30 Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.116647 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k45j4"] Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.117125 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k45j4" podUID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" containerName="registry-server" containerID="cri-o://f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394" gracePeriod=30 Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.127216 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bsjcr"] Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.127670 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" podUID="8a0b7e05-ef31-426e-989f-a6ad6c710150" containerName="marketplace-operator" containerID="cri-o://3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1" gracePeriod=30 Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.141585 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgrfj"] Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.141987 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qgrfj" podUID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerName="registry-server" containerID="cri-o://35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7" gracePeriod=30 Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.152626 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9m8f"] Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.154153 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f9m8f" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" containerName="registry-server" containerID="cri-o://fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b" gracePeriod=30 Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.537029 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.586532 4898 util.go:48] "No ready sandbox for pod can be found. 
Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.588857 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjb9d\" (UniqueName: \"kubernetes.io/projected/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-kube-api-access-kjb9d\") pod \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") "
Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.589016 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p4k7\" (UniqueName: \"kubernetes.io/projected/f0c2a49d-68aa-428a-87dd-fc3cddb41040-kube-api-access-4p4k7\") pod \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") "
Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.589036 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-catalog-content\") pod \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") "
Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.589055 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-utilities\") pod \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\" (UID: \"f0c2a49d-68aa-428a-87dd-fc3cddb41040\") "
Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.589096 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-utilities\") pod \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") "
Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.589111 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-catalog-content\") pod \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\" (UID: \"2eab3b38-2b5e-4ab8-8660-a45f19b1d329\") "
Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.590095 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-utilities" (OuterVolumeSpecName: "utilities") pod "f0c2a49d-68aa-428a-87dd-fc3cddb41040" (UID: "f0c2a49d-68aa-428a-87dd-fc3cddb41040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.590384 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-utilities" (OuterVolumeSpecName: "utilities") pod "2eab3b38-2b5e-4ab8-8660-a45f19b1d329" (UID: "2eab3b38-2b5e-4ab8-8660-a45f19b1d329"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.596567 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c2a49d-68aa-428a-87dd-fc3cddb41040-kube-api-access-4p4k7" (OuterVolumeSpecName: "kube-api-access-4p4k7") pod "f0c2a49d-68aa-428a-87dd-fc3cddb41040" (UID: "f0c2a49d-68aa-428a-87dd-fc3cddb41040"). InnerVolumeSpecName "kube-api-access-4p4k7". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.596808 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-kube-api-access-kjb9d" (OuterVolumeSpecName: "kube-api-access-kjb9d") pod "2eab3b38-2b5e-4ab8-8660-a45f19b1d329" (UID: "2eab3b38-2b5e-4ab8-8660-a45f19b1d329"). InnerVolumeSpecName "kube-api-access-kjb9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.600113 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.610333 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.617209 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.649573 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2eab3b38-2b5e-4ab8-8660-a45f19b1d329" (UID: "2eab3b38-2b5e-4ab8-8660-a45f19b1d329"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.690193 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk7bt\" (UniqueName: \"kubernetes.io/projected/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-kube-api-access-sk7bt\") pod \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\" (UID: \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.690532 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-trusted-ca\") pod \"8a0b7e05-ef31-426e-989f-a6ad6c710150\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.690750 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-utilities\") pod \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\" (UID: \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.690906 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-operator-metrics\") pod \"8a0b7e05-ef31-426e-989f-a6ad6c710150\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.691057 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7fcb\" (UniqueName: \"kubernetes.io/projected/8a0b7e05-ef31-426e-989f-a6ad6c710150-kube-api-access-v7fcb\") pod \"8a0b7e05-ef31-426e-989f-a6ad6c710150\" (UID: \"8a0b7e05-ef31-426e-989f-a6ad6c710150\") " Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.691216 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-utilities\") pod \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\" (UID: \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.691389 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rhf7\" (UniqueName: \"kubernetes.io/projected/1b568186-dd5f-4340-9b0b-f083bf37a1b5-kube-api-access-9rhf7\") pod \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\" (UID: \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.691625 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-catalog-content\") pod \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\" (UID: \"70d524b5-855e-4dda-aaa8-5ae9463e7b3c\") " Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.691792 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-catalog-content\") pod \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\" (UID: \"1b568186-dd5f-4340-9b0b-f083bf37a1b5\") " Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.691525 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8a0b7e05-ef31-426e-989f-a6ad6c710150" (UID: "8a0b7e05-ef31-426e-989f-a6ad6c710150"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.691984 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-utilities" (OuterVolumeSpecName: "utilities") pod "1b568186-dd5f-4340-9b0b-f083bf37a1b5" (UID: "1b568186-dd5f-4340-9b0b-f083bf37a1b5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.692376 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p4k7\" (UniqueName: \"kubernetes.io/projected/f0c2a49d-68aa-428a-87dd-fc3cddb41040-kube-api-access-4p4k7\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.692408 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.692421 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.692445 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.692457 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjb9d\" (UniqueName: \"kubernetes.io/projected/2eab3b38-2b5e-4ab8-8660-a45f19b1d329-kube-api-access-kjb9d\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.700610 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-utilities" (OuterVolumeSpecName: "utilities") pod "70d524b5-855e-4dda-aaa8-5ae9463e7b3c" (UID: "70d524b5-855e-4dda-aaa8-5ae9463e7b3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.702572 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8a0b7e05-ef31-426e-989f-a6ad6c710150" (UID: "8a0b7e05-ef31-426e-989f-a6ad6c710150"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.705700 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b568186-dd5f-4340-9b0b-f083bf37a1b5-kube-api-access-9rhf7" (OuterVolumeSpecName: "kube-api-access-9rhf7") pod "1b568186-dd5f-4340-9b0b-f083bf37a1b5" (UID: "1b568186-dd5f-4340-9b0b-f083bf37a1b5"). InnerVolumeSpecName "kube-api-access-9rhf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.705820 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0b7e05-ef31-426e-989f-a6ad6c710150-kube-api-access-v7fcb" (OuterVolumeSpecName: "kube-api-access-v7fcb") pod "8a0b7e05-ef31-426e-989f-a6ad6c710150" (UID: "8a0b7e05-ef31-426e-989f-a6ad6c710150"). InnerVolumeSpecName "kube-api-access-v7fcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.710989 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-kube-api-access-sk7bt" (OuterVolumeSpecName: "kube-api-access-sk7bt") pod "70d524b5-855e-4dda-aaa8-5ae9463e7b3c" (UID: "70d524b5-855e-4dda-aaa8-5ae9463e7b3c"). InnerVolumeSpecName "kube-api-access-sk7bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.736279 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70d524b5-855e-4dda-aaa8-5ae9463e7b3c" (UID: "70d524b5-855e-4dda-aaa8-5ae9463e7b3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.736601 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0c2a49d-68aa-428a-87dd-fc3cddb41040" (UID: "f0c2a49d-68aa-428a-87dd-fc3cddb41040"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.741104 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b568186-dd5f-4340-9b0b-f083bf37a1b5" (UID: "1b568186-dd5f-4340-9b0b-f083bf37a1b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.793819 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.793847 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c2a49d-68aa-428a-87dd-fc3cddb41040-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.793860 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk7bt\" (UniqueName: \"kubernetes.io/projected/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-kube-api-access-sk7bt\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.793873 4898 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.793882 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b568186-dd5f-4340-9b0b-f083bf37a1b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.793892 4898 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a0b7e05-ef31-426e-989f-a6ad6c710150-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.793901 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7fcb\" (UniqueName: \"kubernetes.io/projected/8a0b7e05-ef31-426e-989f-a6ad6c710150-kube-api-access-v7fcb\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.793910 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.793919 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rhf7\" (UniqueName: \"kubernetes.io/projected/1b568186-dd5f-4340-9b0b-f083bf37a1b5-kube-api-access-9rhf7\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.793927 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d524b5-855e-4dda-aaa8-5ae9463e7b3c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.972065 4898 generic.go:334] "Generic (PLEG): container finished" podID="8a0b7e05-ef31-426e-989f-a6ad6c710150" containerID="3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1" exitCode=0 Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.972138 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.972176 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" event={"ID":"8a0b7e05-ef31-426e-989f-a6ad6c710150","Type":"ContainerDied","Data":"3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1"} Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.972377 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bsjcr" event={"ID":"8a0b7e05-ef31-426e-989f-a6ad6c710150","Type":"ContainerDied","Data":"1b4e93131c24ebce932ea872461966c226feb328a1c4fce8d964c76a9e177da8"} Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.972398 4898 scope.go:117] "RemoveContainer" containerID="3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.977392 4898 generic.go:334] "Generic (PLEG): container finished" podID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerID="35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7" exitCode=0 Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.977571 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgrfj" event={"ID":"70d524b5-855e-4dda-aaa8-5ae9463e7b3c","Type":"ContainerDied","Data":"35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7"} Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.977642 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgrfj" event={"ID":"70d524b5-855e-4dda-aaa8-5ae9463e7b3c","Type":"ContainerDied","Data":"3f9391918fffd4d517d5d9a8cc29d2a79d398454ab110408f593ea080fe12bd2"} Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.977583 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgrfj" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.982418 4898 generic.go:334] "Generic (PLEG): container finished" podID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" containerID="f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394" exitCode=0 Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.982471 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k45j4" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.982475 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k45j4" event={"ID":"1b568186-dd5f-4340-9b0b-f083bf37a1b5","Type":"ContainerDied","Data":"f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394"} Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.983255 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k45j4" event={"ID":"1b568186-dd5f-4340-9b0b-f083bf37a1b5","Type":"ContainerDied","Data":"8349868cb7a6e53dd2a5b982ed37811e260f8eaabfc733d5c52b4a5dcedf824d"} Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.992538 4898 generic.go:334] "Generic (PLEG): container finished" podID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerID="3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4" exitCode=0 Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.992765 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bvccs" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.993350 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvccs" event={"ID":"2eab3b38-2b5e-4ab8-8660-a45f19b1d329","Type":"ContainerDied","Data":"3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4"} Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.993550 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvccs" event={"ID":"2eab3b38-2b5e-4ab8-8660-a45f19b1d329","Type":"ContainerDied","Data":"908ae6b6a4976880f747a798cd3d1a368d8aa58f34a95402f8c3f8a975e7e77f"} Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.995171 4898 scope.go:117] "RemoveContainer" containerID="3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1" Jan 20 03:54:18 crc kubenswrapper[4898]: E0120 03:54:18.995415 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1\": container with ID starting with 3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1 not found: ID does not exist" containerID="3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.995456 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1"} err="failed to get container status \"3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1\": rpc error: code = NotFound desc = could not find container \"3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1\": container with ID starting with 3c99ca66732d3c682b6e3387304b2772c269162072753c8d6be9a2c3bf1fe6a1 not found: ID does not exist" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.995476 4898 scope.go:117] "RemoveContainer" containerID="35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7" Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.998106 4898 generic.go:334] "Generic (PLEG): container finished" podID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" containerID="fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b" exitCode=0 Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.998149 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9m8f" event={"ID":"f0c2a49d-68aa-428a-87dd-fc3cddb41040","Type":"ContainerDied","Data":"fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b"} Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.998180 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9m8f" event={"ID":"f0c2a49d-68aa-428a-87dd-fc3cddb41040","Type":"ContainerDied","Data":"7ac2dcc206880ebdc70b118b1c60c05506b88b48170b09b972d2463f7fb2eb32"} Jan 20 03:54:18 crc kubenswrapper[4898]: I0120 03:54:18.998719 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9m8f" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.026793 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bsjcr"] Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.033014 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bsjcr"] Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.040010 4898 scope.go:117] "RemoveContainer" containerID="e00ab1278b11d7cefcaee3b786572c942432fe6c48ae6eeeb080da17efbb54ac" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.046058 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgrfj"] Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.055787 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgrfj"] Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.083015 4898 scope.go:117] "RemoveContainer" containerID="8a2cb5d9e8809ccfa895be9f9a400d611da4814f620650de1408f90723a0ddbc" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.085893 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bvccs"] Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.099764 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bvccs"] Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.111847 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9m8f"] Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.118531 4898 scope.go:117] "RemoveContainer" containerID="35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.120535 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f9m8f"] Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.121236 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7\": container with ID starting with 35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7 not found: ID does not exist" containerID="35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.121280 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7"} err="failed to get container status \"35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7\": rpc error: code = NotFound desc = could not find container \"35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7\": container with ID starting with 35721b817b39ba55a5a8b4eca5bfe3e34ce4bf84fdfa82efb98b0ccbe37580b7 not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.121306 4898 scope.go:117] "RemoveContainer" containerID="e00ab1278b11d7cefcaee3b786572c942432fe6c48ae6eeeb080da17efbb54ac" Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.121637 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00ab1278b11d7cefcaee3b786572c942432fe6c48ae6eeeb080da17efbb54ac\": container with ID starting with 
e00ab1278b11d7cefcaee3b786572c942432fe6c48ae6eeeb080da17efbb54ac not found: ID does not exist" containerID="e00ab1278b11d7cefcaee3b786572c942432fe6c48ae6eeeb080da17efbb54ac" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.121692 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00ab1278b11d7cefcaee3b786572c942432fe6c48ae6eeeb080da17efbb54ac"} err="failed to get container status \"e00ab1278b11d7cefcaee3b786572c942432fe6c48ae6eeeb080da17efbb54ac\": rpc error: code = NotFound desc = could not find container \"e00ab1278b11d7cefcaee3b786572c942432fe6c48ae6eeeb080da17efbb54ac\": container with ID starting with e00ab1278b11d7cefcaee3b786572c942432fe6c48ae6eeeb080da17efbb54ac not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.121734 4898 scope.go:117] "RemoveContainer" containerID="8a2cb5d9e8809ccfa895be9f9a400d611da4814f620650de1408f90723a0ddbc" Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.122095 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2cb5d9e8809ccfa895be9f9a400d611da4814f620650de1408f90723a0ddbc\": container with ID starting with 8a2cb5d9e8809ccfa895be9f9a400d611da4814f620650de1408f90723a0ddbc not found: ID does not exist" containerID="8a2cb5d9e8809ccfa895be9f9a400d611da4814f620650de1408f90723a0ddbc" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.122139 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2cb5d9e8809ccfa895be9f9a400d611da4814f620650de1408f90723a0ddbc"} err="failed to get container status \"8a2cb5d9e8809ccfa895be9f9a400d611da4814f620650de1408f90723a0ddbc\": rpc error: code = NotFound desc = could not find container \"8a2cb5d9e8809ccfa895be9f9a400d611da4814f620650de1408f90723a0ddbc\": container with ID starting with 8a2cb5d9e8809ccfa895be9f9a400d611da4814f620650de1408f90723a0ddbc not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.122166 4898 scope.go:117] "RemoveContainer" containerID="f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.128420 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k45j4"] Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.136506 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k45j4"] Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.140559 4898 scope.go:117] "RemoveContainer" containerID="eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.177073 4898 scope.go:117] "RemoveContainer" containerID="d813620cbb66daac6097afc9cb5ea706dd9c384fe58e599b6d7973d8b633a28a" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.196039 4898 scope.go:117] "RemoveContainer" containerID="f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394" Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.198931 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394\": container with ID starting with f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394 not found: ID does not exist" containerID="f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394" Jan 20 03:54:19 crc 
kubenswrapper[4898]: I0120 03:54:19.198984 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394"} err="failed to get container status \"f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394\": rpc error: code = NotFound desc = could not find container \"f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394\": container with ID starting with f93e46f3102537ee0828e9f7aca9af9422b92f5f5eac1fd20b22d64cad3e6394 not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.199018 4898 scope.go:117] "RemoveContainer" containerID="eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6" Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.199691 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6\": container with ID starting with eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6 not found: ID does not exist" containerID="eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.199737 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6"} err="failed to get container status \"eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6\": rpc error: code = NotFound desc = could not find container \"eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6\": container with ID starting with eee1e50e3b830da65e5b3d3fed78f5371d33ce6e9dd756db0f6fa259056304b6 not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.199763 4898 scope.go:117] "RemoveContainer" containerID="d813620cbb66daac6097afc9cb5ea706dd9c384fe58e599b6d7973d8b633a28a" Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.204652 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d813620cbb66daac6097afc9cb5ea706dd9c384fe58e599b6d7973d8b633a28a\": container with ID starting with d813620cbb66daac6097afc9cb5ea706dd9c384fe58e599b6d7973d8b633a28a not found: ID does not exist" containerID="d813620cbb66daac6097afc9cb5ea706dd9c384fe58e599b6d7973d8b633a28a" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.204710 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d813620cbb66daac6097afc9cb5ea706dd9c384fe58e599b6d7973d8b633a28a"} err="failed to get container status \"d813620cbb66daac6097afc9cb5ea706dd9c384fe58e599b6d7973d8b633a28a\": rpc error: code = NotFound desc = could not find container \"d813620cbb66daac6097afc9cb5ea706dd9c384fe58e599b6d7973d8b633a28a\": container with ID starting with d813620cbb66daac6097afc9cb5ea706dd9c384fe58e599b6d7973d8b633a28a not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.204744 4898 scope.go:117] "RemoveContainer" containerID="3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.225865 4898 scope.go:117] "RemoveContainer" containerID="83df61923a4a352eaea6491bd6f88a2956d771e4798e14fcba5fe61f6fcb8455" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.247581 4898 scope.go:117] "RemoveContainer" 
containerID="4f374f9072e325bd41ff945b7d2648739dacaf354de7f6f450473cb3c336c5be" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.288386 4898 scope.go:117] "RemoveContainer" containerID="3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4" Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.289009 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4\": container with ID starting with 3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4 not found: ID does not exist" containerID="3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.289099 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4"} err="failed to get container status \"3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4\": rpc error: code = NotFound desc = could not find container \"3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4\": container with ID starting with 3bd6f7193129ff8c4083be400129b49fbb2cd28278f6e55d0d52c6005e8810c4 not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.289169 4898 scope.go:117] "RemoveContainer" containerID="83df61923a4a352eaea6491bd6f88a2956d771e4798e14fcba5fe61f6fcb8455" Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.289835 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83df61923a4a352eaea6491bd6f88a2956d771e4798e14fcba5fe61f6fcb8455\": container with ID starting with 83df61923a4a352eaea6491bd6f88a2956d771e4798e14fcba5fe61f6fcb8455 not found: ID does not exist" containerID="83df61923a4a352eaea6491bd6f88a2956d771e4798e14fcba5fe61f6fcb8455" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.289987 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83df61923a4a352eaea6491bd6f88a2956d771e4798e14fcba5fe61f6fcb8455"} err="failed to get container status \"83df61923a4a352eaea6491bd6f88a2956d771e4798e14fcba5fe61f6fcb8455\": rpc error: code = NotFound desc = could not find container \"83df61923a4a352eaea6491bd6f88a2956d771e4798e14fcba5fe61f6fcb8455\": container with ID starting with 83df61923a4a352eaea6491bd6f88a2956d771e4798e14fcba5fe61f6fcb8455 not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.290037 4898 scope.go:117] "RemoveContainer" containerID="4f374f9072e325bd41ff945b7d2648739dacaf354de7f6f450473cb3c336c5be" Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.290718 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f374f9072e325bd41ff945b7d2648739dacaf354de7f6f450473cb3c336c5be\": container with ID starting with 4f374f9072e325bd41ff945b7d2648739dacaf354de7f6f450473cb3c336c5be not found: ID does not exist" containerID="4f374f9072e325bd41ff945b7d2648739dacaf354de7f6f450473cb3c336c5be" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.290785 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f374f9072e325bd41ff945b7d2648739dacaf354de7f6f450473cb3c336c5be"} err="failed to get container status \"4f374f9072e325bd41ff945b7d2648739dacaf354de7f6f450473cb3c336c5be\": rpc error: code = 
NotFound desc = could not find container \"4f374f9072e325bd41ff945b7d2648739dacaf354de7f6f450473cb3c336c5be\": container with ID starting with 4f374f9072e325bd41ff945b7d2648739dacaf354de7f6f450473cb3c336c5be not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.290828 4898 scope.go:117] "RemoveContainer" containerID="fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.309557 4898 scope.go:117] "RemoveContainer" containerID="f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.334581 4898 scope.go:117] "RemoveContainer" containerID="881f1d78b417f196189fc804ba3e9d31dcf76fba4901b4a4010e815cbee05e8a" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.355649 4898 scope.go:117] "RemoveContainer" containerID="fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b" Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.356122 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b\": container with ID starting with fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b not found: ID does not exist" containerID="fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.356212 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b"} err="failed to get container status \"fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b\": rpc error: code = NotFound desc = could not find container \"fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b\": container with ID starting with fa52c1de22927964eadd6e60e768ef75afd6976be14f1f55cae1907d1a1dd44b not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.356281 4898 scope.go:117] "RemoveContainer" containerID="f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7" Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.356985 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7\": container with ID starting with f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7 not found: ID does not exist" containerID="f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.357056 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7"} err="failed to get container status \"f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7\": rpc error: code = NotFound desc = could not find container \"f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7\": container with ID starting with f869c8c83504341ee192c1e05a94e5db783b8fc571ce0c986044055ee881bda7 not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.357083 4898 scope.go:117] "RemoveContainer" containerID="881f1d78b417f196189fc804ba3e9d31dcf76fba4901b4a4010e815cbee05e8a" Jan 20 03:54:19 crc kubenswrapper[4898]: E0120 03:54:19.357834 4898 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"881f1d78b417f196189fc804ba3e9d31dcf76fba4901b4a4010e815cbee05e8a\": container with ID starting with 881f1d78b417f196189fc804ba3e9d31dcf76fba4901b4a4010e815cbee05e8a not found: ID does not exist" containerID="881f1d78b417f196189fc804ba3e9d31dcf76fba4901b4a4010e815cbee05e8a" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.357904 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"881f1d78b417f196189fc804ba3e9d31dcf76fba4901b4a4010e815cbee05e8a"} err="failed to get container status \"881f1d78b417f196189fc804ba3e9d31dcf76fba4901b4a4010e815cbee05e8a\": rpc error: code = NotFound desc = could not find container \"881f1d78b417f196189fc804ba3e9d31dcf76fba4901b4a4010e815cbee05e8a\": container with ID starting with 881f1d78b417f196189fc804ba3e9d31dcf76fba4901b4a4010e815cbee05e8a not found: ID does not exist" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.734472 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" path="/var/lib/kubelet/pods/1b568186-dd5f-4340-9b0b-f083bf37a1b5/volumes" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.735683 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" path="/var/lib/kubelet/pods/2eab3b38-2b5e-4ab8-8660-a45f19b1d329/volumes" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.736857 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" path="/var/lib/kubelet/pods/70d524b5-855e-4dda-aaa8-5ae9463e7b3c/volumes" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.738971 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a0b7e05-ef31-426e-989f-a6ad6c710150" path="/var/lib/kubelet/pods/8a0b7e05-ef31-426e-989f-a6ad6c710150/volumes" Jan 20 03:54:19 crc kubenswrapper[4898]: I0120 03:54:19.739911 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" path="/var/lib/kubelet/pods/f0c2a49d-68aa-428a-87dd-fc3cddb41040/volumes" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581369 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6zngz"] Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581587 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerName="extract-utilities" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581599 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerName="extract-utilities" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581609 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" containerName="extract-content" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581615 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" containerName="extract-content" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581622 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581628 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" 
containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581638 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581644 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581651 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" containerName="installer" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581657 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" containerName="installer" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581667 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581673 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581682 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" containerName="extract-utilities" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581688 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" containerName="extract-utilities" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581696 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581701 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581708 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" containerName="extract-utilities" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581724 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" containerName="extract-utilities" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581732 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerName="extract-utilities" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581738 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerName="extract-utilities" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581745 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" containerName="extract-content" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581751 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" containerName="extract-content" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581761 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581767 4898 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581776 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerName="extract-content" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581781 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerName="extract-content" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581792 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0b7e05-ef31-426e-989f-a6ad6c710150" containerName="marketplace-operator" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581798 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0b7e05-ef31-426e-989f-a6ad6c710150" containerName="marketplace-operator" Jan 20 03:54:20 crc kubenswrapper[4898]: E0120 03:54:20.581809 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerName="extract-content" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581815 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerName="extract-content" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581893 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581903 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="9571428a-14e5-47b1-b963-13f4a5bdfaba" containerName="installer" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581914 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d524b5-855e-4dda-aaa8-5ae9463e7b3c" containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581922 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eab3b38-2b5e-4ab8-8660-a45f19b1d329" containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581930 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a0b7e05-ef31-426e-989f-a6ad6c710150" containerName="marketplace-operator" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581937 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c2a49d-68aa-428a-87dd-fc3cddb41040" containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.581948 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b568186-dd5f-4340-9b0b-f083bf37a1b5" containerName="registry-server" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.582299 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.586982 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.586977 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.587069 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.587091 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.596750 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6zngz"] Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.619536 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.728597 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ada7bb7-b089-45d1-8314-5a3218932dfb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6zngz\" (UID: \"4ada7bb7-b089-45d1-8314-5a3218932dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.729153 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4ada7bb7-b089-45d1-8314-5a3218932dfb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6zngz\" (UID: \"4ada7bb7-b089-45d1-8314-5a3218932dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.729204 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79vnh\" (UniqueName: \"kubernetes.io/projected/4ada7bb7-b089-45d1-8314-5a3218932dfb-kube-api-access-79vnh\") pod \"marketplace-operator-79b997595-6zngz\" (UID: \"4ada7bb7-b089-45d1-8314-5a3218932dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.830071 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4ada7bb7-b089-45d1-8314-5a3218932dfb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6zngz\" (UID: \"4ada7bb7-b089-45d1-8314-5a3218932dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.830110 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79vnh\" (UniqueName: \"kubernetes.io/projected/4ada7bb7-b089-45d1-8314-5a3218932dfb-kube-api-access-79vnh\") pod \"marketplace-operator-79b997595-6zngz\" (UID: \"4ada7bb7-b089-45d1-8314-5a3218932dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.830190 4898 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ada7bb7-b089-45d1-8314-5a3218932dfb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6zngz\" (UID: \"4ada7bb7-b089-45d1-8314-5a3218932dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.831173 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ada7bb7-b089-45d1-8314-5a3218932dfb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6zngz\" (UID: \"4ada7bb7-b089-45d1-8314-5a3218932dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.838066 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4ada7bb7-b089-45d1-8314-5a3218932dfb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6zngz\" (UID: \"4ada7bb7-b089-45d1-8314-5a3218932dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.845960 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79vnh\" (UniqueName: \"kubernetes.io/projected/4ada7bb7-b089-45d1-8314-5a3218932dfb-kube-api-access-79vnh\") pod \"marketplace-operator-79b997595-6zngz\" (UID: \"4ada7bb7-b089-45d1-8314-5a3218932dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:20 crc kubenswrapper[4898]: I0120 03:54:20.897901 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:21 crc kubenswrapper[4898]: I0120 03:54:21.114047 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6zngz"] Jan 20 03:54:21 crc kubenswrapper[4898]: W0120 03:54:21.124320 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ada7bb7_b089_45d1_8314_5a3218932dfb.slice/crio-5af3ff6fd2e429dabea28bbd8027bf61f1e751660841be5b9134386a182da4ed WatchSource:0}: Error finding container 5af3ff6fd2e429dabea28bbd8027bf61f1e751660841be5b9134386a182da4ed: Status 404 returned error can't find the container with id 5af3ff6fd2e429dabea28bbd8027bf61f1e751660841be5b9134386a182da4ed Jan 20 03:54:22 crc kubenswrapper[4898]: I0120 03:54:22.029816 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" event={"ID":"4ada7bb7-b089-45d1-8314-5a3218932dfb","Type":"ContainerStarted","Data":"7fcea7baa26a91256b652a4daba89645b97695183167fccfd4cfdb1ace54d18a"} Jan 20 03:54:22 crc kubenswrapper[4898]: I0120 03:54:22.030216 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:22 crc kubenswrapper[4898]: I0120 03:54:22.030239 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" event={"ID":"4ada7bb7-b089-45d1-8314-5a3218932dfb","Type":"ContainerStarted","Data":"5af3ff6fd2e429dabea28bbd8027bf61f1e751660841be5b9134386a182da4ed"} Jan 20 03:54:22 crc kubenswrapper[4898]: I0120 03:54:22.033046 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" Jan 20 03:54:22 crc kubenswrapper[4898]: I0120 03:54:22.046386 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6zngz" podStartSLOduration=2.046367595 podStartE2EDuration="2.046367595s" podCreationTimestamp="2026-01-20 03:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:54:22.043603351 +0000 UTC m=+308.643391210" watchObservedRunningTime="2026-01-20 03:54:22.046367595 +0000 UTC m=+308.646155454" Jan 20 03:54:40 crc kubenswrapper[4898]: I0120 03:54:40.636337 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ff5xs"] Jan 20 03:54:40 crc kubenswrapper[4898]: I0120 03:54:40.637102 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" podUID="4fa80055-6c27-434c-b6b3-166af5828101" containerName="controller-manager" containerID="cri-o://d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910" gracePeriod=30 Jan 20 03:54:40 crc kubenswrapper[4898]: I0120 03:54:40.749395 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf"] Jan 20 03:54:40 crc kubenswrapper[4898]: I0120 03:54:40.749632 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" podUID="00fedf08-d9d4-43f5-96ff-3f705c050a96" containerName="route-controller-manager" containerID="cri-o://e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30" gracePeriod=30 Jan 20 03:54:40 crc kubenswrapper[4898]: I0120 03:54:40.843508 4898 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7jzrf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 20 03:54:40 crc kubenswrapper[4898]: I0120 03:54:40.843771 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" podUID="00fedf08-d9d4-43f5-96ff-3f705c050a96" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 20 03:54:40 crc kubenswrapper[4898]: I0120 03:54:40.986405 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.044513 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.106781 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-proxy-ca-bundles\") pod \"4fa80055-6c27-434c-b6b3-166af5828101\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.106825 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa80055-6c27-434c-b6b3-166af5828101-serving-cert\") pod \"4fa80055-6c27-434c-b6b3-166af5828101\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.106852 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmp7h\" (UniqueName: \"kubernetes.io/projected/00fedf08-d9d4-43f5-96ff-3f705c050a96-kube-api-access-fmp7h\") pod \"00fedf08-d9d4-43f5-96ff-3f705c050a96\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.106875 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-client-ca\") pod \"4fa80055-6c27-434c-b6b3-166af5828101\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.106903 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00fedf08-d9d4-43f5-96ff-3f705c050a96-serving-cert\") pod \"00fedf08-d9d4-43f5-96ff-3f705c050a96\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.106919 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-client-ca\") pod \"00fedf08-d9d4-43f5-96ff-3f705c050a96\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.106938 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-config\") pod \"00fedf08-d9d4-43f5-96ff-3f705c050a96\" (UID: \"00fedf08-d9d4-43f5-96ff-3f705c050a96\") " Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.106963 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpjbp\" (UniqueName: \"kubernetes.io/projected/4fa80055-6c27-434c-b6b3-166af5828101-kube-api-access-bpjbp\") pod \"4fa80055-6c27-434c-b6b3-166af5828101\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.106984 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-config\") pod \"4fa80055-6c27-434c-b6b3-166af5828101\" (UID: \"4fa80055-6c27-434c-b6b3-166af5828101\") " Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.107692 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-client-ca" (OuterVolumeSpecName: "client-ca") pod "4fa80055-6c27-434c-b6b3-166af5828101" 
(UID: "4fa80055-6c27-434c-b6b3-166af5828101"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.108002 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4fa80055-6c27-434c-b6b3-166af5828101" (UID: "4fa80055-6c27-434c-b6b3-166af5828101"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.108139 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-config" (OuterVolumeSpecName: "config") pod "4fa80055-6c27-434c-b6b3-166af5828101" (UID: "4fa80055-6c27-434c-b6b3-166af5828101"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.108141 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-config" (OuterVolumeSpecName: "config") pod "00fedf08-d9d4-43f5-96ff-3f705c050a96" (UID: "00fedf08-d9d4-43f5-96ff-3f705c050a96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.110618 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-client-ca" (OuterVolumeSpecName: "client-ca") pod "00fedf08-d9d4-43f5-96ff-3f705c050a96" (UID: "00fedf08-d9d4-43f5-96ff-3f705c050a96"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.112157 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa80055-6c27-434c-b6b3-166af5828101-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4fa80055-6c27-434c-b6b3-166af5828101" (UID: "4fa80055-6c27-434c-b6b3-166af5828101"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.112459 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00fedf08-d9d4-43f5-96ff-3f705c050a96-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "00fedf08-d9d4-43f5-96ff-3f705c050a96" (UID: "00fedf08-d9d4-43f5-96ff-3f705c050a96"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.112789 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa80055-6c27-434c-b6b3-166af5828101-kube-api-access-bpjbp" (OuterVolumeSpecName: "kube-api-access-bpjbp") pod "4fa80055-6c27-434c-b6b3-166af5828101" (UID: "4fa80055-6c27-434c-b6b3-166af5828101"). InnerVolumeSpecName "kube-api-access-bpjbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.113258 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00fedf08-d9d4-43f5-96ff-3f705c050a96-kube-api-access-fmp7h" (OuterVolumeSpecName: "kube-api-access-fmp7h") pod "00fedf08-d9d4-43f5-96ff-3f705c050a96" (UID: "00fedf08-d9d4-43f5-96ff-3f705c050a96"). 
InnerVolumeSpecName "kube-api-access-fmp7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.140049 4898 generic.go:334] "Generic (PLEG): container finished" podID="4fa80055-6c27-434c-b6b3-166af5828101" containerID="d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910" exitCode=0 Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.140106 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.140146 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" event={"ID":"4fa80055-6c27-434c-b6b3-166af5828101","Type":"ContainerDied","Data":"d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910"} Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.140246 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ff5xs" event={"ID":"4fa80055-6c27-434c-b6b3-166af5828101","Type":"ContainerDied","Data":"2a3448d0863f91e8d2c8e2c20e8c5a8e0cd3e1cf34ab77de3635b91d07157d78"} Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.140273 4898 scope.go:117] "RemoveContainer" containerID="d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.141915 4898 generic.go:334] "Generic (PLEG): container finished" podID="00fedf08-d9d4-43f5-96ff-3f705c050a96" containerID="e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30" exitCode=0 Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.141976 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.141979 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" event={"ID":"00fedf08-d9d4-43f5-96ff-3f705c050a96","Type":"ContainerDied","Data":"e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30"} Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.142127 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf" event={"ID":"00fedf08-d9d4-43f5-96ff-3f705c050a96","Type":"ContainerDied","Data":"181ed1718dd13908fdb254b17fcae1772dfabdc04826fb1139f75a1747d98f46"} Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.161407 4898 scope.go:117] "RemoveContainer" containerID="d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910" Jan 20 03:54:41 crc kubenswrapper[4898]: E0120 03:54:41.162470 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910\": container with ID starting with d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910 not found: ID does not exist" containerID="d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.162503 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910"} err="failed to get container status \"d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910\": rpc error: code = NotFound desc = could not find container \"d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910\": container with ID starting with d66502fb71486b5ec5867700b7daab80571629377e870ec639f6b7f59d3d7910 not found: ID does not exist" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.162523 4898 scope.go:117] "RemoveContainer" containerID="e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.176348 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ff5xs"] Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.181487 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ff5xs"] Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.186174 4898 scope.go:117] "RemoveContainer" containerID="e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30" Jan 20 03:54:41 crc kubenswrapper[4898]: E0120 03:54:41.186794 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30\": container with ID starting with e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30 not found: ID does not exist" containerID="e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.186883 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30"} err="failed to get container status 
\"e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30\": rpc error: code = NotFound desc = could not find container \"e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30\": container with ID starting with e509bb08d1ee729b8278c93a639e350497b942d003df4e0da75e09e9116f1e30 not found: ID does not exist" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.190393 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf"] Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.193248 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7jzrf"] Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.207907 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.207929 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa80055-6c27-434c-b6b3-166af5828101-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.207940 4898 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.207951 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmp7h\" (UniqueName: \"kubernetes.io/projected/00fedf08-d9d4-43f5-96ff-3f705c050a96-kube-api-access-fmp7h\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.207962 4898 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fa80055-6c27-434c-b6b3-166af5828101-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.207970 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00fedf08-d9d4-43f5-96ff-3f705c050a96-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.207978 4898 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.207986 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00fedf08-d9d4-43f5-96ff-3f705c050a96-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.207994 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpjbp\" (UniqueName: \"kubernetes.io/projected/4fa80055-6c27-434c-b6b3-166af5828101-kube-api-access-bpjbp\") on node \"crc\" DevicePath \"\"" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.732520 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00fedf08-d9d4-43f5-96ff-3f705c050a96" path="/var/lib/kubelet/pods/00fedf08-d9d4-43f5-96ff-3f705c050a96/volumes" Jan 20 03:54:41 crc kubenswrapper[4898]: I0120 03:54:41.733478 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa80055-6c27-434c-b6b3-166af5828101" 
path="/var/lib/kubelet/pods/4fa80055-6c27-434c-b6b3-166af5828101/volumes" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.104138 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl"] Jan 20 03:54:42 crc kubenswrapper[4898]: E0120 03:54:42.104530 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa80055-6c27-434c-b6b3-166af5828101" containerName="controller-manager" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.104559 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa80055-6c27-434c-b6b3-166af5828101" containerName="controller-manager" Jan 20 03:54:42 crc kubenswrapper[4898]: E0120 03:54:42.104574 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00fedf08-d9d4-43f5-96ff-3f705c050a96" containerName="route-controller-manager" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.104583 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="00fedf08-d9d4-43f5-96ff-3f705c050a96" containerName="route-controller-manager" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.104726 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa80055-6c27-434c-b6b3-166af5828101" containerName="controller-manager" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.104743 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="00fedf08-d9d4-43f5-96ff-3f705c050a96" containerName="route-controller-manager" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.105299 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.107716 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.107963 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.108212 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.108506 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.108620 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.108936 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6857957965-4t59q"] Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.109665 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.110961 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.112654 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.113234 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.113257 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.113283 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.113256 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.113906 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.120842 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.129469 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl"] Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.136514 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6857957965-4t59q"] Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.219696 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-proxy-ca-bundles\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.219763 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0311261-06de-4807-a142-d0263437f3c5-serving-cert\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.219794 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7spq\" (UniqueName: \"kubernetes.io/projected/28f9db10-2a9c-497c-b32c-c603273f160c-kube-api-access-m7spq\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.219845 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-config\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.219881 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28f9db10-2a9c-497c-b32c-c603273f160c-client-ca\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.220700 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-client-ca\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.220746 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csvg9\" (UniqueName: \"kubernetes.io/projected/b0311261-06de-4807-a142-d0263437f3c5-kube-api-access-csvg9\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.220875 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28f9db10-2a9c-497c-b32c-c603273f160c-serving-cert\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.220965 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f9db10-2a9c-497c-b32c-c603273f160c-config\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.322276 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-config\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.322360 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28f9db10-2a9c-497c-b32c-c603273f160c-client-ca\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.322406 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-client-ca\") pod 
\"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.322501 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csvg9\" (UniqueName: \"kubernetes.io/projected/b0311261-06de-4807-a142-d0263437f3c5-kube-api-access-csvg9\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.322574 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28f9db10-2a9c-497c-b32c-c603273f160c-serving-cert\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.322615 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f9db10-2a9c-497c-b32c-c603273f160c-config\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.322699 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-proxy-ca-bundles\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.322752 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0311261-06de-4807-a142-d0263437f3c5-serving-cert\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.322792 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7spq\" (UniqueName: \"kubernetes.io/projected/28f9db10-2a9c-497c-b32c-c603273f160c-kube-api-access-m7spq\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.324021 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-client-ca\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.324062 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-config\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 
20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.324839 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-proxy-ca-bundles\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.324900 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28f9db10-2a9c-497c-b32c-c603273f160c-client-ca\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.325855 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f9db10-2a9c-497c-b32c-c603273f160c-config\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.329177 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28f9db10-2a9c-497c-b32c-c603273f160c-serving-cert\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.329178 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0311261-06de-4807-a142-d0263437f3c5-serving-cert\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.343204 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7spq\" (UniqueName: \"kubernetes.io/projected/28f9db10-2a9c-497c-b32c-c603273f160c-kube-api-access-m7spq\") pod \"route-controller-manager-5f656f8f76-g74vl\" (UID: \"28f9db10-2a9c-497c-b32c-c603273f160c\") " pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.355717 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csvg9\" (UniqueName: \"kubernetes.io/projected/b0311261-06de-4807-a142-d0263437f3c5-kube-api-access-csvg9\") pod \"controller-manager-6857957965-4t59q\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.436122 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.446331 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.692247 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl"] Jan 20 03:54:42 crc kubenswrapper[4898]: I0120 03:54:42.718933 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6857957965-4t59q"] Jan 20 03:54:42 crc kubenswrapper[4898]: W0120 03:54:42.728000 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0311261_06de_4807_a142_d0263437f3c5.slice/crio-be3842c7126ca6fe3238c2efc156a805f795f16016ad5e84b2e00b2ead3265c4 WatchSource:0}: Error finding container be3842c7126ca6fe3238c2efc156a805f795f16016ad5e84b2e00b2ead3265c4: Status 404 returned error can't find the container with id be3842c7126ca6fe3238c2efc156a805f795f16016ad5e84b2e00b2ead3265c4 Jan 20 03:54:43 crc kubenswrapper[4898]: I0120 03:54:43.163710 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" event={"ID":"28f9db10-2a9c-497c-b32c-c603273f160c","Type":"ContainerStarted","Data":"ea809ac161c5504f9cf654a5fea7795f47f3151980db354cdf8e4a4e9d5cfba9"} Jan 20 03:54:43 crc kubenswrapper[4898]: I0120 03:54:43.165561 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" event={"ID":"28f9db10-2a9c-497c-b32c-c603273f160c","Type":"ContainerStarted","Data":"27cb965f2e5f021bca462be57191d4f00c16fda61a0b0b5700c905ca36660747"} Jan 20 03:54:43 crc kubenswrapper[4898]: I0120 03:54:43.165892 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:54:43 crc kubenswrapper[4898]: I0120 03:54:43.166772 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" event={"ID":"b0311261-06de-4807-a142-d0263437f3c5","Type":"ContainerStarted","Data":"0a6ceab26fcd099f46973330a9cb478a9f4b65eed66210d15e7ae1d912b040c2"} Jan 20 03:54:43 crc kubenswrapper[4898]: I0120 03:54:43.166828 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" event={"ID":"b0311261-06de-4807-a142-d0263437f3c5","Type":"ContainerStarted","Data":"be3842c7126ca6fe3238c2efc156a805f795f16016ad5e84b2e00b2ead3265c4"} Jan 20 03:54:43 crc kubenswrapper[4898]: I0120 03:54:43.167050 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:43 crc kubenswrapper[4898]: I0120 03:54:43.170864 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:54:43 crc kubenswrapper[4898]: I0120 03:54:43.179779 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" podStartSLOduration=3.179762979 podStartE2EDuration="3.179762979s" podCreationTimestamp="2026-01-20 03:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:54:43.178604204 +0000 UTC 
m=+329.778392063" watchObservedRunningTime="2026-01-20 03:54:43.179762979 +0000 UTC m=+329.779550838" Jan 20 03:54:43 crc kubenswrapper[4898]: I0120 03:54:43.201732 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" podStartSLOduration=3.201712398 podStartE2EDuration="3.201712398s" podCreationTimestamp="2026-01-20 03:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:54:43.200987536 +0000 UTC m=+329.800775395" watchObservedRunningTime="2026-01-20 03:54:43.201712398 +0000 UTC m=+329.801500257" Jan 20 03:54:43 crc kubenswrapper[4898]: I0120 03:54:43.215272 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f656f8f76-g74vl" Jan 20 03:55:00 crc kubenswrapper[4898]: I0120 03:55:00.672213 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6857957965-4t59q"] Jan 20 03:55:00 crc kubenswrapper[4898]: I0120 03:55:00.673621 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" podUID="b0311261-06de-4807-a142-d0263437f3c5" containerName="controller-manager" containerID="cri-o://0a6ceab26fcd099f46973330a9cb478a9f4b65eed66210d15e7ae1d912b040c2" gracePeriod=30 Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.272103 4898 generic.go:334] "Generic (PLEG): container finished" podID="b0311261-06de-4807-a142-d0263437f3c5" containerID="0a6ceab26fcd099f46973330a9cb478a9f4b65eed66210d15e7ae1d912b040c2" exitCode=0 Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.272182 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" event={"ID":"b0311261-06de-4807-a142-d0263437f3c5","Type":"ContainerDied","Data":"0a6ceab26fcd099f46973330a9cb478a9f4b65eed66210d15e7ae1d912b040c2"} Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.782600 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.787059 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0311261-06de-4807-a142-d0263437f3c5-serving-cert\") pod \"b0311261-06de-4807-a142-d0263437f3c5\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.798711 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0311261-06de-4807-a142-d0263437f3c5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b0311261-06de-4807-a142-d0263437f3c5" (UID: "b0311261-06de-4807-a142-d0263437f3c5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.812288 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68879cc6cd-82zmn"] Jan 20 03:55:01 crc kubenswrapper[4898]: E0120 03:55:01.812520 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0311261-06de-4807-a142-d0263437f3c5" containerName="controller-manager" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.812534 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0311261-06de-4807-a142-d0263437f3c5" containerName="controller-manager" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.812639 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0311261-06de-4807-a142-d0263437f3c5" containerName="controller-manager" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.812974 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.860155 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68879cc6cd-82zmn"] Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.887953 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-proxy-ca-bundles\") pod \"b0311261-06de-4807-a142-d0263437f3c5\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.888003 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csvg9\" (UniqueName: \"kubernetes.io/projected/b0311261-06de-4807-a142-d0263437f3c5-kube-api-access-csvg9\") pod \"b0311261-06de-4807-a142-d0263437f3c5\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.888033 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-client-ca\") pod \"b0311261-06de-4807-a142-d0263437f3c5\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.888087 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-config\") pod \"b0311261-06de-4807-a142-d0263437f3c5\" (UID: \"b0311261-06de-4807-a142-d0263437f3c5\") " Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.888198 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b79de1-043d-4313-b167-93447dfefdf0-config\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.888240 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52b79de1-043d-4313-b167-93447dfefdf0-client-ca\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: 
I0120 03:55:01.888620 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52b79de1-043d-4313-b167-93447dfefdf0-serving-cert\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.888675 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52b79de1-043d-4313-b167-93447dfefdf0-proxy-ca-bundles\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.888905 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnbjk\" (UniqueName: \"kubernetes.io/projected/52b79de1-043d-4313-b167-93447dfefdf0-kube-api-access-tnbjk\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.889016 4898 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0311261-06de-4807-a142-d0263437f3c5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.889013 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-client-ca" (OuterVolumeSpecName: "client-ca") pod "b0311261-06de-4807-a142-d0263437f3c5" (UID: "b0311261-06de-4807-a142-d0263437f3c5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.889043 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-config" (OuterVolumeSpecName: "config") pod "b0311261-06de-4807-a142-d0263437f3c5" (UID: "b0311261-06de-4807-a142-d0263437f3c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.889218 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b0311261-06de-4807-a142-d0263437f3c5" (UID: "b0311261-06de-4807-a142-d0263437f3c5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.891326 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0311261-06de-4807-a142-d0263437f3c5-kube-api-access-csvg9" (OuterVolumeSpecName: "kube-api-access-csvg9") pod "b0311261-06de-4807-a142-d0263437f3c5" (UID: "b0311261-06de-4807-a142-d0263437f3c5"). InnerVolumeSpecName "kube-api-access-csvg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.990596 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b79de1-043d-4313-b167-93447dfefdf0-config\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.990718 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52b79de1-043d-4313-b167-93447dfefdf0-client-ca\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.990789 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52b79de1-043d-4313-b167-93447dfefdf0-serving-cert\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.990827 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52b79de1-043d-4313-b167-93447dfefdf0-proxy-ca-bundles\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.990912 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnbjk\" (UniqueName: \"kubernetes.io/projected/52b79de1-043d-4313-b167-93447dfefdf0-kube-api-access-tnbjk\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.990981 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.991002 4898 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.991026 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csvg9\" (UniqueName: \"kubernetes.io/projected/b0311261-06de-4807-a142-d0263437f3c5-kube-api-access-csvg9\") on node \"crc\" DevicePath \"\"" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.991043 4898 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0311261-06de-4807-a142-d0263437f3c5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.992427 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52b79de1-043d-4313-b167-93447dfefdf0-client-ca\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " 
pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.992842 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52b79de1-043d-4313-b167-93447dfefdf0-proxy-ca-bundles\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.994256 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52b79de1-043d-4313-b167-93447dfefdf0-serving-cert\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:01 crc kubenswrapper[4898]: I0120 03:55:01.994446 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b79de1-043d-4313-b167-93447dfefdf0-config\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:02 crc kubenswrapper[4898]: I0120 03:55:02.007777 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnbjk\" (UniqueName: \"kubernetes.io/projected/52b79de1-043d-4313-b167-93447dfefdf0-kube-api-access-tnbjk\") pod \"controller-manager-68879cc6cd-82zmn\" (UID: \"52b79de1-043d-4313-b167-93447dfefdf0\") " pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:02 crc kubenswrapper[4898]: I0120 03:55:02.135311 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:02 crc kubenswrapper[4898]: I0120 03:55:02.280744 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" event={"ID":"b0311261-06de-4807-a142-d0263437f3c5","Type":"ContainerDied","Data":"be3842c7126ca6fe3238c2efc156a805f795f16016ad5e84b2e00b2ead3265c4"} Jan 20 03:55:02 crc kubenswrapper[4898]: I0120 03:55:02.280802 4898 scope.go:117] "RemoveContainer" containerID="0a6ceab26fcd099f46973330a9cb478a9f4b65eed66210d15e7ae1d912b040c2" Jan 20 03:55:02 crc kubenswrapper[4898]: I0120 03:55:02.280916 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6857957965-4t59q" Jan 20 03:55:02 crc kubenswrapper[4898]: I0120 03:55:02.331218 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6857957965-4t59q"] Jan 20 03:55:02 crc kubenswrapper[4898]: I0120 03:55:02.336054 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6857957965-4t59q"] Jan 20 03:55:02 crc kubenswrapper[4898]: I0120 03:55:02.339904 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68879cc6cd-82zmn"] Jan 20 03:55:02 crc kubenswrapper[4898]: W0120 03:55:02.342100 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52b79de1_043d_4313_b167_93447dfefdf0.slice/crio-e8d5a79b038bd216ac47069178eedc482444711449aa834944a6ae71d84d76b7 WatchSource:0}: Error finding container e8d5a79b038bd216ac47069178eedc482444711449aa834944a6ae71d84d76b7: Status 404 returned error can't find the container with id e8d5a79b038bd216ac47069178eedc482444711449aa834944a6ae71d84d76b7 Jan 20 03:55:03 crc kubenswrapper[4898]: I0120 03:55:03.291626 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" event={"ID":"52b79de1-043d-4313-b167-93447dfefdf0","Type":"ContainerStarted","Data":"bc94177fc5c45e405ac6eced5eea0d219e9d37be194607123e9e24c8ef6b6d43"} Jan 20 03:55:03 crc kubenswrapper[4898]: I0120 03:55:03.292054 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" event={"ID":"52b79de1-043d-4313-b167-93447dfefdf0","Type":"ContainerStarted","Data":"e8d5a79b038bd216ac47069178eedc482444711449aa834944a6ae71d84d76b7"} Jan 20 03:55:03 crc kubenswrapper[4898]: I0120 03:55:03.292090 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:03 crc kubenswrapper[4898]: I0120 03:55:03.300865 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" Jan 20 03:55:03 crc kubenswrapper[4898]: I0120 03:55:03.312787 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68879cc6cd-82zmn" podStartSLOduration=3.312748036 podStartE2EDuration="3.312748036s" podCreationTimestamp="2026-01-20 03:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:55:03.31186811 +0000 UTC m=+349.911656019" watchObservedRunningTime="2026-01-20 03:55:03.312748036 +0000 UTC m=+349.912535955" Jan 20 03:55:03 crc kubenswrapper[4898]: I0120 03:55:03.728119 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0311261-06de-4807-a142-d0263437f3c5" path="/var/lib/kubelet/pods/b0311261-06de-4807-a142-d0263437f3c5/volumes" Jan 20 03:55:09 crc kubenswrapper[4898]: I0120 03:55:09.976598 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 03:55:09 crc kubenswrapper[4898]: I0120 
03:55:09.977426 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.385088 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qtczp"] Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.386537 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.389948 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.402901 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qtczp"] Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.552676 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxp6c\" (UniqueName: \"kubernetes.io/projected/b8875827-2900-4d96-ae50-be27e6fe41da-kube-api-access-gxp6c\") pod \"redhat-operators-qtczp\" (UID: \"b8875827-2900-4d96-ae50-be27e6fe41da\") " pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.552868 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8875827-2900-4d96-ae50-be27e6fe41da-utilities\") pod \"redhat-operators-qtczp\" (UID: \"b8875827-2900-4d96-ae50-be27e6fe41da\") " pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.552941 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8875827-2900-4d96-ae50-be27e6fe41da-catalog-content\") pod \"redhat-operators-qtczp\" (UID: \"b8875827-2900-4d96-ae50-be27e6fe41da\") " pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.577864 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nn7q7"] Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.579026 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.582986 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.594833 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nn7q7"] Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.653783 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22d509aa-3e38-4323-87fb-ff9b23c0dd2a-utilities\") pod \"community-operators-nn7q7\" (UID: \"22d509aa-3e38-4323-87fb-ff9b23c0dd2a\") " pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.653843 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhgh6\" (UniqueName: \"kubernetes.io/projected/22d509aa-3e38-4323-87fb-ff9b23c0dd2a-kube-api-access-jhgh6\") pod \"community-operators-nn7q7\" (UID: \"22d509aa-3e38-4323-87fb-ff9b23c0dd2a\") " pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.653882 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22d509aa-3e38-4323-87fb-ff9b23c0dd2a-catalog-content\") pod \"community-operators-nn7q7\" (UID: \"22d509aa-3e38-4323-87fb-ff9b23c0dd2a\") " pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.653948 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8875827-2900-4d96-ae50-be27e6fe41da-utilities\") pod \"redhat-operators-qtczp\" (UID: \"b8875827-2900-4d96-ae50-be27e6fe41da\") " pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.653995 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8875827-2900-4d96-ae50-be27e6fe41da-catalog-content\") pod \"redhat-operators-qtczp\" (UID: \"b8875827-2900-4d96-ae50-be27e6fe41da\") " pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.654047 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxp6c\" (UniqueName: \"kubernetes.io/projected/b8875827-2900-4d96-ae50-be27e6fe41da-kube-api-access-gxp6c\") pod \"redhat-operators-qtczp\" (UID: \"b8875827-2900-4d96-ae50-be27e6fe41da\") " pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.654493 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8875827-2900-4d96-ae50-be27e6fe41da-utilities\") pod \"redhat-operators-qtczp\" (UID: \"b8875827-2900-4d96-ae50-be27e6fe41da\") " pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.654645 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8875827-2900-4d96-ae50-be27e6fe41da-catalog-content\") pod \"redhat-operators-qtczp\" (UID: 
\"b8875827-2900-4d96-ae50-be27e6fe41da\") " pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.677625 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxp6c\" (UniqueName: \"kubernetes.io/projected/b8875827-2900-4d96-ae50-be27e6fe41da-kube-api-access-gxp6c\") pod \"redhat-operators-qtczp\" (UID: \"b8875827-2900-4d96-ae50-be27e6fe41da\") " pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.707334 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.715114 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.755520 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22d509aa-3e38-4323-87fb-ff9b23c0dd2a-utilities\") pod \"community-operators-nn7q7\" (UID: \"22d509aa-3e38-4323-87fb-ff9b23c0dd2a\") " pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.755588 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhgh6\" (UniqueName: \"kubernetes.io/projected/22d509aa-3e38-4323-87fb-ff9b23c0dd2a-kube-api-access-jhgh6\") pod \"community-operators-nn7q7\" (UID: \"22d509aa-3e38-4323-87fb-ff9b23c0dd2a\") " pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.755654 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22d509aa-3e38-4323-87fb-ff9b23c0dd2a-catalog-content\") pod \"community-operators-nn7q7\" (UID: \"22d509aa-3e38-4323-87fb-ff9b23c0dd2a\") " pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.756091 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22d509aa-3e38-4323-87fb-ff9b23c0dd2a-utilities\") pod \"community-operators-nn7q7\" (UID: \"22d509aa-3e38-4323-87fb-ff9b23c0dd2a\") " pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.756591 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22d509aa-3e38-4323-87fb-ff9b23c0dd2a-catalog-content\") pod \"community-operators-nn7q7\" (UID: \"22d509aa-3e38-4323-87fb-ff9b23c0dd2a\") " pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.777768 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhgh6\" (UniqueName: \"kubernetes.io/projected/22d509aa-3e38-4323-87fb-ff9b23c0dd2a-kube-api-access-jhgh6\") pod \"community-operators-nn7q7\" (UID: \"22d509aa-3e38-4323-87fb-ff9b23c0dd2a\") " pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.909563 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 03:55:13 crc kubenswrapper[4898]: I0120 03:55:13.966680 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:14 crc kubenswrapper[4898]: I0120 03:55:14.121535 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qtczp"] Jan 20 03:55:14 crc kubenswrapper[4898]: W0120 03:55:14.125883 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8875827_2900_4d96_ae50_be27e6fe41da.slice/crio-ac3d75baa1864e95b3df0ff8dca8488a1b2c111e00ecca264a4a73c155fa6fe5 WatchSource:0}: Error finding container ac3d75baa1864e95b3df0ff8dca8488a1b2c111e00ecca264a4a73c155fa6fe5: Status 404 returned error can't find the container with id ac3d75baa1864e95b3df0ff8dca8488a1b2c111e00ecca264a4a73c155fa6fe5 Jan 20 03:55:14 crc kubenswrapper[4898]: I0120 03:55:14.337845 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nn7q7"] Jan 20 03:55:14 crc kubenswrapper[4898]: W0120 03:55:14.343314 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22d509aa_3e38_4323_87fb_ff9b23c0dd2a.slice/crio-a4b473ad6b1ab5c0a650ccabf42817a0f70cd28b0b0f04fcec0dd9bb31482053 WatchSource:0}: Error finding container a4b473ad6b1ab5c0a650ccabf42817a0f70cd28b0b0f04fcec0dd9bb31482053: Status 404 returned error can't find the container with id a4b473ad6b1ab5c0a650ccabf42817a0f70cd28b0b0f04fcec0dd9bb31482053 Jan 20 03:55:14 crc kubenswrapper[4898]: I0120 03:55:14.375476 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nn7q7" event={"ID":"22d509aa-3e38-4323-87fb-ff9b23c0dd2a","Type":"ContainerStarted","Data":"a4b473ad6b1ab5c0a650ccabf42817a0f70cd28b0b0f04fcec0dd9bb31482053"} Jan 20 03:55:14 crc kubenswrapper[4898]: I0120 03:55:14.377510 4898 generic.go:334] "Generic (PLEG): container finished" podID="b8875827-2900-4d96-ae50-be27e6fe41da" containerID="3904e4697454943debf896effc9a6ef3557eabba7c85552263a27f599acec2cc" exitCode=0 Jan 20 03:55:14 crc kubenswrapper[4898]: I0120 03:55:14.377583 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtczp" event={"ID":"b8875827-2900-4d96-ae50-be27e6fe41da","Type":"ContainerDied","Data":"3904e4697454943debf896effc9a6ef3557eabba7c85552263a27f599acec2cc"} Jan 20 03:55:14 crc kubenswrapper[4898]: I0120 03:55:14.377605 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtczp" event={"ID":"b8875827-2900-4d96-ae50-be27e6fe41da","Type":"ContainerStarted","Data":"ac3d75baa1864e95b3df0ff8dca8488a1b2c111e00ecca264a4a73c155fa6fe5"} Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.383772 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtczp" event={"ID":"b8875827-2900-4d96-ae50-be27e6fe41da","Type":"ContainerStarted","Data":"660ef9c2d93925924fc95c908d2c21568926298315cbe78b5ab7694cd6917b61"} Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.385308 4898 generic.go:334] "Generic (PLEG): container finished" podID="22d509aa-3e38-4323-87fb-ff9b23c0dd2a" containerID="b3a2df0be6e9742498318f1b5ef3321eacf4384ce3fff6539c672ed806d8fb02" exitCode=0 Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.385496 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nn7q7" 
event={"ID":"22d509aa-3e38-4323-87fb-ff9b23c0dd2a","Type":"ContainerDied","Data":"b3a2df0be6e9742498318f1b5ef3321eacf4384ce3fff6539c672ed806d8fb02"} Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.779066 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pnfgk"] Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.780257 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.782709 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.790210 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pnfgk"] Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.897662 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4464047d-a0d0-4b7c-aa1f-3553b8f0f04c-utilities\") pod \"certified-operators-pnfgk\" (UID: \"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c\") " pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.897900 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4464047d-a0d0-4b7c-aa1f-3553b8f0f04c-catalog-content\") pod \"certified-operators-pnfgk\" (UID: \"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c\") " pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.898008 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfc49\" (UniqueName: \"kubernetes.io/projected/4464047d-a0d0-4b7c-aa1f-3553b8f0f04c-kube-api-access-zfc49\") pod \"certified-operators-pnfgk\" (UID: \"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c\") " pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.976221 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4rmfj"] Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.978035 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.980741 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 03:55:15 crc kubenswrapper[4898]: I0120 03:55:15.989104 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rmfj"] Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.000948 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4464047d-a0d0-4b7c-aa1f-3553b8f0f04c-utilities\") pod \"certified-operators-pnfgk\" (UID: \"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c\") " pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.001023 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4464047d-a0d0-4b7c-aa1f-3553b8f0f04c-catalog-content\") pod \"certified-operators-pnfgk\" (UID: \"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c\") " pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.001079 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfc49\" (UniqueName: \"kubernetes.io/projected/4464047d-a0d0-4b7c-aa1f-3553b8f0f04c-kube-api-access-zfc49\") pod \"certified-operators-pnfgk\" (UID: \"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c\") " pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.002675 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4464047d-a0d0-4b7c-aa1f-3553b8f0f04c-utilities\") pod \"certified-operators-pnfgk\" (UID: \"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c\") " pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.002982 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4464047d-a0d0-4b7c-aa1f-3553b8f0f04c-catalog-content\") pod \"certified-operators-pnfgk\" (UID: \"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c\") " pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.034619 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfc49\" (UniqueName: \"kubernetes.io/projected/4464047d-a0d0-4b7c-aa1f-3553b8f0f04c-kube-api-access-zfc49\") pod \"certified-operators-pnfgk\" (UID: \"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c\") " pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.102308 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4677b629-b059-4952-b816-45484f784fec-catalog-content\") pod \"redhat-marketplace-4rmfj\" (UID: \"4677b629-b059-4952-b816-45484f784fec\") " pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.102379 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmn5b\" (UniqueName: \"kubernetes.io/projected/4677b629-b059-4952-b816-45484f784fec-kube-api-access-wmn5b\") pod \"redhat-marketplace-4rmfj\" (UID: 
\"4677b629-b059-4952-b816-45484f784fec\") " pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.102494 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4677b629-b059-4952-b816-45484f784fec-utilities\") pod \"redhat-marketplace-4rmfj\" (UID: \"4677b629-b059-4952-b816-45484f784fec\") " pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.140025 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.203865 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4677b629-b059-4952-b816-45484f784fec-catalog-content\") pod \"redhat-marketplace-4rmfj\" (UID: \"4677b629-b059-4952-b816-45484f784fec\") " pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.203931 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmn5b\" (UniqueName: \"kubernetes.io/projected/4677b629-b059-4952-b816-45484f784fec-kube-api-access-wmn5b\") pod \"redhat-marketplace-4rmfj\" (UID: \"4677b629-b059-4952-b816-45484f784fec\") " pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.203963 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4677b629-b059-4952-b816-45484f784fec-utilities\") pod \"redhat-marketplace-4rmfj\" (UID: \"4677b629-b059-4952-b816-45484f784fec\") " pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.204484 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4677b629-b059-4952-b816-45484f784fec-catalog-content\") pod \"redhat-marketplace-4rmfj\" (UID: \"4677b629-b059-4952-b816-45484f784fec\") " pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.204535 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4677b629-b059-4952-b816-45484f784fec-utilities\") pod \"redhat-marketplace-4rmfj\" (UID: \"4677b629-b059-4952-b816-45484f784fec\") " pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.222067 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmn5b\" (UniqueName: \"kubernetes.io/projected/4677b629-b059-4952-b816-45484f784fec-kube-api-access-wmn5b\") pod \"redhat-marketplace-4rmfj\" (UID: \"4677b629-b059-4952-b816-45484f784fec\") " pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.308825 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.395841 4898 generic.go:334] "Generic (PLEG): container finished" podID="b8875827-2900-4d96-ae50-be27e6fe41da" containerID="660ef9c2d93925924fc95c908d2c21568926298315cbe78b5ab7694cd6917b61" exitCode=0 Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.395935 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtczp" event={"ID":"b8875827-2900-4d96-ae50-be27e6fe41da","Type":"ContainerDied","Data":"660ef9c2d93925924fc95c908d2c21568926298315cbe78b5ab7694cd6917b61"} Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.401389 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nn7q7" event={"ID":"22d509aa-3e38-4323-87fb-ff9b23c0dd2a","Type":"ContainerStarted","Data":"3efc4c567e1dd14c9f5eba81e84a0c6cc8bd431bea0f110464ebbedbcda16042"} Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.564956 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pnfgk"] Jan 20 03:55:16 crc kubenswrapper[4898]: W0120 03:55:16.572903 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4464047d_a0d0_4b7c_aa1f_3553b8f0f04c.slice/crio-6c18c40dfaec32492b5e30148110b6c7e8a8040f3dacac1cffd50f7d6b53f7f9 WatchSource:0}: Error finding container 6c18c40dfaec32492b5e30148110b6c7e8a8040f3dacac1cffd50f7d6b53f7f9: Status 404 returned error can't find the container with id 6c18c40dfaec32492b5e30148110b6c7e8a8040f3dacac1cffd50f7d6b53f7f9 Jan 20 03:55:16 crc kubenswrapper[4898]: I0120 03:55:16.704784 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rmfj"] Jan 20 03:55:16 crc kubenswrapper[4898]: W0120 03:55:16.711721 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4677b629_b059_4952_b816_45484f784fec.slice/crio-db1a5abbde221ab24248a578ca97aed64f7b07022d723b940ef4c09ddf1625b0 WatchSource:0}: Error finding container db1a5abbde221ab24248a578ca97aed64f7b07022d723b940ef4c09ddf1625b0: Status 404 returned error can't find the container with id db1a5abbde221ab24248a578ca97aed64f7b07022d723b940ef4c09ddf1625b0 Jan 20 03:55:17 crc kubenswrapper[4898]: I0120 03:55:17.406887 4898 generic.go:334] "Generic (PLEG): container finished" podID="4677b629-b059-4952-b816-45484f784fec" containerID="5debc9aef22322a5c5ef7a9db8abc19ef94237954ad3c1c149fc5127a6913be0" exitCode=0 Jan 20 03:55:17 crc kubenswrapper[4898]: I0120 03:55:17.407170 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rmfj" event={"ID":"4677b629-b059-4952-b816-45484f784fec","Type":"ContainerDied","Data":"5debc9aef22322a5c5ef7a9db8abc19ef94237954ad3c1c149fc5127a6913be0"} Jan 20 03:55:17 crc kubenswrapper[4898]: I0120 03:55:17.407195 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rmfj" event={"ID":"4677b629-b059-4952-b816-45484f784fec","Type":"ContainerStarted","Data":"db1a5abbde221ab24248a578ca97aed64f7b07022d723b940ef4c09ddf1625b0"} Jan 20 03:55:17 crc kubenswrapper[4898]: I0120 03:55:17.413476 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtczp" 
event={"ID":"b8875827-2900-4d96-ae50-be27e6fe41da","Type":"ContainerStarted","Data":"c3909c85ee00be1a25fd52e62cde9da0b57bad1fc1f1da2a56e8d12c1c239cf1"} Jan 20 03:55:17 crc kubenswrapper[4898]: I0120 03:55:17.414683 4898 generic.go:334] "Generic (PLEG): container finished" podID="4464047d-a0d0-4b7c-aa1f-3553b8f0f04c" containerID="8b89624cc006e19c5a04cf471cf109800bbe48d7557712a1ef60bdf276a1244d" exitCode=0 Jan 20 03:55:17 crc kubenswrapper[4898]: I0120 03:55:17.414778 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnfgk" event={"ID":"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c","Type":"ContainerDied","Data":"8b89624cc006e19c5a04cf471cf109800bbe48d7557712a1ef60bdf276a1244d"} Jan 20 03:55:17 crc kubenswrapper[4898]: I0120 03:55:17.414832 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnfgk" event={"ID":"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c","Type":"ContainerStarted","Data":"6c18c40dfaec32492b5e30148110b6c7e8a8040f3dacac1cffd50f7d6b53f7f9"} Jan 20 03:55:17 crc kubenswrapper[4898]: I0120 03:55:17.416564 4898 generic.go:334] "Generic (PLEG): container finished" podID="22d509aa-3e38-4323-87fb-ff9b23c0dd2a" containerID="3efc4c567e1dd14c9f5eba81e84a0c6cc8bd431bea0f110464ebbedbcda16042" exitCode=0 Jan 20 03:55:17 crc kubenswrapper[4898]: I0120 03:55:17.416615 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nn7q7" event={"ID":"22d509aa-3e38-4323-87fb-ff9b23c0dd2a","Type":"ContainerDied","Data":"3efc4c567e1dd14c9f5eba81e84a0c6cc8bd431bea0f110464ebbedbcda16042"} Jan 20 03:55:17 crc kubenswrapper[4898]: I0120 03:55:17.485099 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qtczp" podStartSLOduration=1.895404603 podStartE2EDuration="4.485067941s" podCreationTimestamp="2026-01-20 03:55:13 +0000 UTC" firstStartedPulling="2026-01-20 03:55:14.379086425 +0000 UTC m=+360.978874304" lastFinishedPulling="2026-01-20 03:55:16.968749773 +0000 UTC m=+363.568537642" observedRunningTime="2026-01-20 03:55:17.483635607 +0000 UTC m=+364.083423466" watchObservedRunningTime="2026-01-20 03:55:17.485067941 +0000 UTC m=+364.084855800" Jan 20 03:55:18 crc kubenswrapper[4898]: I0120 03:55:18.423154 4898 generic.go:334] "Generic (PLEG): container finished" podID="4464047d-a0d0-4b7c-aa1f-3553b8f0f04c" containerID="ef2cc468e5fea008aa75eba56bcf8fe10c8a20407019aaaa6a4aaa7f1a405503" exitCode=0 Jan 20 03:55:18 crc kubenswrapper[4898]: I0120 03:55:18.423200 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnfgk" event={"ID":"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c","Type":"ContainerDied","Data":"ef2cc468e5fea008aa75eba56bcf8fe10c8a20407019aaaa6a4aaa7f1a405503"} Jan 20 03:55:18 crc kubenswrapper[4898]: I0120 03:55:18.425381 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nn7q7" event={"ID":"22d509aa-3e38-4323-87fb-ff9b23c0dd2a","Type":"ContainerStarted","Data":"977bb5298d0af733c296488411786ba4748e9ad194d44a2edd8cc20347535a0e"} Jan 20 03:55:18 crc kubenswrapper[4898]: I0120 03:55:18.428839 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rmfj" event={"ID":"4677b629-b059-4952-b816-45484f784fec","Type":"ContainerStarted","Data":"ce3c39d34f44c4bd1552ff65fb33116f3c4a06d8bd984b68a8067fbb2246f939"} Jan 20 03:55:18 crc kubenswrapper[4898]: I0120 03:55:18.460954 4898 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nn7q7" podStartSLOduration=2.9329900540000002 podStartE2EDuration="5.460930339s" podCreationTimestamp="2026-01-20 03:55:13 +0000 UTC" firstStartedPulling="2026-01-20 03:55:15.387518482 +0000 UTC m=+361.987306341" lastFinishedPulling="2026-01-20 03:55:17.915458737 +0000 UTC m=+364.515246626" observedRunningTime="2026-01-20 03:55:18.460021062 +0000 UTC m=+365.059808921" watchObservedRunningTime="2026-01-20 03:55:18.460930339 +0000 UTC m=+365.060718198" Jan 20 03:55:19 crc kubenswrapper[4898]: I0120 03:55:19.450498 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnfgk" event={"ID":"4464047d-a0d0-4b7c-aa1f-3553b8f0f04c","Type":"ContainerStarted","Data":"25c0d13c3341f4109cd2ac2ea03d67a77a89090d2b9ea9a9162e6132c2184f90"} Jan 20 03:55:19 crc kubenswrapper[4898]: I0120 03:55:19.454075 4898 generic.go:334] "Generic (PLEG): container finished" podID="4677b629-b059-4952-b816-45484f784fec" containerID="ce3c39d34f44c4bd1552ff65fb33116f3c4a06d8bd984b68a8067fbb2246f939" exitCode=0 Jan 20 03:55:19 crc kubenswrapper[4898]: I0120 03:55:19.454280 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rmfj" event={"ID":"4677b629-b059-4952-b816-45484f784fec","Type":"ContainerDied","Data":"ce3c39d34f44c4bd1552ff65fb33116f3c4a06d8bd984b68a8067fbb2246f939"} Jan 20 03:55:19 crc kubenswrapper[4898]: I0120 03:55:19.472952 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pnfgk" podStartSLOduration=2.89821047 podStartE2EDuration="4.472936465s" podCreationTimestamp="2026-01-20 03:55:15 +0000 UTC" firstStartedPulling="2026-01-20 03:55:17.415553773 +0000 UTC m=+364.015341632" lastFinishedPulling="2026-01-20 03:55:18.990279768 +0000 UTC m=+365.590067627" observedRunningTime="2026-01-20 03:55:19.468850832 +0000 UTC m=+366.068638701" watchObservedRunningTime="2026-01-20 03:55:19.472936465 +0000 UTC m=+366.072724344" Jan 20 03:55:20 crc kubenswrapper[4898]: I0120 03:55:20.463492 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rmfj" event={"ID":"4677b629-b059-4952-b816-45484f784fec","Type":"ContainerStarted","Data":"1824d11de19cc8382579319cdda7aad14c32265a69a82e6972b81bdd83c3da53"} Jan 20 03:55:20 crc kubenswrapper[4898]: I0120 03:55:20.490827 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4rmfj" podStartSLOduration=2.915095365 podStartE2EDuration="5.490807315s" podCreationTimestamp="2026-01-20 03:55:15 +0000 UTC" firstStartedPulling="2026-01-20 03:55:17.408288464 +0000 UTC m=+364.008076323" lastFinishedPulling="2026-01-20 03:55:19.984000414 +0000 UTC m=+366.583788273" observedRunningTime="2026-01-20 03:55:20.487791495 +0000 UTC m=+367.087579364" watchObservedRunningTime="2026-01-20 03:55:20.490807315 +0000 UTC m=+367.090595184" Jan 20 03:55:23 crc kubenswrapper[4898]: I0120 03:55:23.715782 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:23 crc kubenswrapper[4898]: I0120 03:55:23.716082 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:23 crc kubenswrapper[4898]: I0120 03:55:23.799524 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:23 crc kubenswrapper[4898]: I0120 03:55:23.969424 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:23 crc kubenswrapper[4898]: I0120 03:55:23.969521 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:24 crc kubenswrapper[4898]: I0120 03:55:24.008572 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:24 crc kubenswrapper[4898]: I0120 03:55:24.521659 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qtczp" Jan 20 03:55:24 crc kubenswrapper[4898]: I0120 03:55:24.526848 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nn7q7" Jan 20 03:55:26 crc kubenswrapper[4898]: I0120 03:55:26.140536 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:26 crc kubenswrapper[4898]: I0120 03:55:26.141007 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:26 crc kubenswrapper[4898]: I0120 03:55:26.204828 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:26 crc kubenswrapper[4898]: I0120 03:55:26.310104 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:26 crc kubenswrapper[4898]: I0120 03:55:26.310193 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:26 crc kubenswrapper[4898]: I0120 03:55:26.371079 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:26 crc kubenswrapper[4898]: I0120 03:55:26.548236 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4rmfj" Jan 20 03:55:26 crc kubenswrapper[4898]: I0120 03:55:26.565877 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pnfgk" Jan 20 03:55:39 crc kubenswrapper[4898]: I0120 03:55:39.975545 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 03:55:39 crc kubenswrapper[4898]: I0120 03:55:39.976858 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 03:55:40 crc kubenswrapper[4898]: I0120 03:55:40.810882 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mjr8z"] Jan 20 03:55:40 crc kubenswrapper[4898]: I0120 03:55:40.813307 4898 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:40 crc kubenswrapper[4898]: I0120 03:55:40.835261 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mjr8z"] Jan 20 03:55:40 crc kubenswrapper[4898]: I0120 03:55:40.952964 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/660954c7-e9df-4cdd-8527-07c5af7cd91d-bound-sa-token\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:40 crc kubenswrapper[4898]: I0120 03:55:40.953020 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/660954c7-e9df-4cdd-8527-07c5af7cd91d-trusted-ca\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:40 crc kubenswrapper[4898]: I0120 03:55:40.953045 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/660954c7-e9df-4cdd-8527-07c5af7cd91d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:40 crc kubenswrapper[4898]: I0120 03:55:40.953228 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/660954c7-e9df-4cdd-8527-07c5af7cd91d-registry-certificates\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:40 crc kubenswrapper[4898]: I0120 03:55:40.953367 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:40 crc kubenswrapper[4898]: I0120 03:55:40.953423 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/660954c7-e9df-4cdd-8527-07c5af7cd91d-registry-tls\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:40 crc kubenswrapper[4898]: I0120 03:55:40.953618 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/660954c7-e9df-4cdd-8527-07c5af7cd91d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:40 crc kubenswrapper[4898]: I0120 03:55:40.953670 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rwmg\" (UniqueName: 
\"kubernetes.io/projected/660954c7-e9df-4cdd-8527-07c5af7cd91d-kube-api-access-6rwmg\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.000196 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.054787 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/660954c7-e9df-4cdd-8527-07c5af7cd91d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.055084 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rwmg\" (UniqueName: \"kubernetes.io/projected/660954c7-e9df-4cdd-8527-07c5af7cd91d-kube-api-access-6rwmg\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.055115 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/660954c7-e9df-4cdd-8527-07c5af7cd91d-bound-sa-token\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.055138 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/660954c7-e9df-4cdd-8527-07c5af7cd91d-trusted-ca\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.055155 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/660954c7-e9df-4cdd-8527-07c5af7cd91d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.055189 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/660954c7-e9df-4cdd-8527-07c5af7cd91d-registry-certificates\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.055214 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/660954c7-e9df-4cdd-8527-07c5af7cd91d-registry-tls\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.056010 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/660954c7-e9df-4cdd-8527-07c5af7cd91d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.056843 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/660954c7-e9df-4cdd-8527-07c5af7cd91d-registry-certificates\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.056875 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/660954c7-e9df-4cdd-8527-07c5af7cd91d-trusted-ca\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.060223 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/660954c7-e9df-4cdd-8527-07c5af7cd91d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.061027 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/660954c7-e9df-4cdd-8527-07c5af7cd91d-registry-tls\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.073654 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/660954c7-e9df-4cdd-8527-07c5af7cd91d-bound-sa-token\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.073752 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rwmg\" (UniqueName: \"kubernetes.io/projected/660954c7-e9df-4cdd-8527-07c5af7cd91d-kube-api-access-6rwmg\") pod \"image-registry-66df7c8f76-mjr8z\" (UID: \"660954c7-e9df-4cdd-8527-07c5af7cd91d\") " pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.129051 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.510705 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mjr8z"] Jan 20 03:55:41 crc kubenswrapper[4898]: I0120 03:55:41.595387 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" event={"ID":"660954c7-e9df-4cdd-8527-07c5af7cd91d","Type":"ContainerStarted","Data":"824c43c70be02ea80d4a96718adea34595282b834325d53c22c3f9479116c2f6"} Jan 20 03:55:42 crc kubenswrapper[4898]: I0120 03:55:42.616752 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" event={"ID":"660954c7-e9df-4cdd-8527-07c5af7cd91d","Type":"ContainerStarted","Data":"73ec8c986661031585e6e7130a82f84ce204823a6f250907b156721b991d5a75"} Jan 20 03:55:42 crc kubenswrapper[4898]: I0120 03:55:42.619978 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:55:42 crc kubenswrapper[4898]: I0120 03:55:42.664394 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" podStartSLOduration=2.6643686300000002 podStartE2EDuration="2.66436863s" podCreationTimestamp="2026-01-20 03:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:55:42.657224385 +0000 UTC m=+389.257012354" watchObservedRunningTime="2026-01-20 03:55:42.66436863 +0000 UTC m=+389.264156519" Jan 20 03:56:01 crc kubenswrapper[4898]: I0120 03:56:01.138822 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mjr8z" Jan 20 03:56:01 crc kubenswrapper[4898]: I0120 03:56:01.210556 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bpvpw"] Jan 20 03:56:09 crc kubenswrapper[4898]: I0120 03:56:09.975971 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 03:56:09 crc kubenswrapper[4898]: I0120 03:56:09.976613 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 03:56:09 crc kubenswrapper[4898]: I0120 03:56:09.976678 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:56:09 crc kubenswrapper[4898]: I0120 03:56:09.977695 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8cbb7ad6d85d39ea7ff2c1068b3057e97901016363b5fcbcec2aac6f311cf2b5"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 03:56:09 crc kubenswrapper[4898]: I0120 03:56:09.977858 
4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://8cbb7ad6d85d39ea7ff2c1068b3057e97901016363b5fcbcec2aac6f311cf2b5" gracePeriod=600 Jan 20 03:56:10 crc kubenswrapper[4898]: I0120 03:56:10.777276 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="8cbb7ad6d85d39ea7ff2c1068b3057e97901016363b5fcbcec2aac6f311cf2b5" exitCode=0 Jan 20 03:56:10 crc kubenswrapper[4898]: I0120 03:56:10.777346 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"8cbb7ad6d85d39ea7ff2c1068b3057e97901016363b5fcbcec2aac6f311cf2b5"} Jan 20 03:56:10 crc kubenswrapper[4898]: I0120 03:56:10.777764 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"44d3f9b5ec84966828017ff7ecf7fddfe0954e43f58c20d5a24c6fd4e2708924"} Jan 20 03:56:10 crc kubenswrapper[4898]: I0120 03:56:10.777805 4898 scope.go:117] "RemoveContainer" containerID="7823d7a3ce3bbee0fded5754e6a592df21ebbdfff12b87ac96be9b56e058fb8e" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.261067 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" podUID="9941fb67-6521-471d-8034-3cb2f695ee40" containerName="registry" containerID="cri-o://ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b" gracePeriod=30 Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.692211 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.729698 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-registry-certificates\") pod \"9941fb67-6521-471d-8034-3cb2f695ee40\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.730102 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59zpb\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-kube-api-access-59zpb\") pod \"9941fb67-6521-471d-8034-3cb2f695ee40\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.730193 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-trusted-ca\") pod \"9941fb67-6521-471d-8034-3cb2f695ee40\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.730232 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-bound-sa-token\") pod \"9941fb67-6521-471d-8034-3cb2f695ee40\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.730280 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9941fb67-6521-471d-8034-3cb2f695ee40-ca-trust-extracted\") pod \"9941fb67-6521-471d-8034-3cb2f695ee40\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.730454 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9941fb67-6521-471d-8034-3cb2f695ee40" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.730501 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"9941fb67-6521-471d-8034-3cb2f695ee40\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.730548 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-registry-tls\") pod \"9941fb67-6521-471d-8034-3cb2f695ee40\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.730581 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9941fb67-6521-471d-8034-3cb2f695ee40-installation-pull-secrets\") pod \"9941fb67-6521-471d-8034-3cb2f695ee40\" (UID: \"9941fb67-6521-471d-8034-3cb2f695ee40\") " Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.730862 4898 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.732021 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9941fb67-6521-471d-8034-3cb2f695ee40" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.737094 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9941fb67-6521-471d-8034-3cb2f695ee40" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.737804 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9941fb67-6521-471d-8034-3cb2f695ee40-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9941fb67-6521-471d-8034-3cb2f695ee40" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.740080 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9941fb67-6521-471d-8034-3cb2f695ee40" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.747748 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-kube-api-access-59zpb" (OuterVolumeSpecName: "kube-api-access-59zpb") pod "9941fb67-6521-471d-8034-3cb2f695ee40" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40"). InnerVolumeSpecName "kube-api-access-59zpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.748306 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "9941fb67-6521-471d-8034-3cb2f695ee40" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.756761 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9941fb67-6521-471d-8034-3cb2f695ee40-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9941fb67-6521-471d-8034-3cb2f695ee40" (UID: "9941fb67-6521-471d-8034-3cb2f695ee40"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.832401 4898 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9941fb67-6521-471d-8034-3cb2f695ee40-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.832469 4898 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.832482 4898 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9941fb67-6521-471d-8034-3cb2f695ee40-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.832499 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59zpb\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-kube-api-access-59zpb\") on node \"crc\" DevicePath \"\"" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.832512 4898 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9941fb67-6521-471d-8034-3cb2f695ee40-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.832525 4898 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9941fb67-6521-471d-8034-3cb2f695ee40-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.892006 4898 generic.go:334] "Generic (PLEG): container finished" podID="9941fb67-6521-471d-8034-3cb2f695ee40" containerID="ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b" exitCode=0 Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.892069 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" 
event={"ID":"9941fb67-6521-471d-8034-3cb2f695ee40","Type":"ContainerDied","Data":"ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b"} Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.892118 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" event={"ID":"9941fb67-6521-471d-8034-3cb2f695ee40","Type":"ContainerDied","Data":"2a4087f6253cf405c4420dc7172c2bf67b180eb8eec8103a9f1dbee1f263b1e0"} Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.892149 4898 scope.go:117] "RemoveContainer" containerID="ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.892419 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bpvpw" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.920405 4898 scope.go:117] "RemoveContainer" containerID="ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b" Jan 20 03:56:26 crc kubenswrapper[4898]: E0120 03:56:26.924960 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b\": container with ID starting with ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b not found: ID does not exist" containerID="ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.925111 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b"} err="failed to get container status \"ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b\": rpc error: code = NotFound desc = could not find container \"ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b\": container with ID starting with ab8408a5033c547d2e839986ec4fee8d66bd5e6739377cd6327cef0ac2ed838b not found: ID does not exist" Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.934100 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bpvpw"] Jan 20 03:56:26 crc kubenswrapper[4898]: I0120 03:56:26.940410 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bpvpw"] Jan 20 03:56:27 crc kubenswrapper[4898]: I0120 03:56:27.734757 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9941fb67-6521-471d-8034-3cb2f695ee40" path="/var/lib/kubelet/pods/9941fb67-6521-471d-8034-3cb2f695ee40/volumes" Jan 20 03:58:39 crc kubenswrapper[4898]: I0120 03:58:39.976681 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 03:58:39 crc kubenswrapper[4898]: I0120 03:58:39.977505 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 03:59:09 crc kubenswrapper[4898]: I0120 03:59:09.976075 4898 patch_prober.go:28] 
interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 03:59:09 crc kubenswrapper[4898]: I0120 03:59:09.976825 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.076013 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-sknqc"] Jan 20 03:59:20 crc kubenswrapper[4898]: E0120 03:59:20.076780 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9941fb67-6521-471d-8034-3cb2f695ee40" containerName="registry" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.076795 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="9941fb67-6521-471d-8034-3cb2f695ee40" containerName="registry" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.076919 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="9941fb67-6521-471d-8034-3cb2f695ee40" containerName="registry" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.077395 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sknqc" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.084469 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.084469 4898 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-n5khf" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.084557 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.094171 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-sknqc"] Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.110542 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-5s8hc"] Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.118947 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-5s8hc" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.120151 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-5s8hc"] Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.123227 4898 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-bzp6x" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.135831 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h4sj\" (UniqueName: \"kubernetes.io/projected/e19b167c-d354-4ecd-b5d6-f9c233efde6a-kube-api-access-9h4sj\") pod \"cert-manager-858654f9db-5s8hc\" (UID: \"e19b167c-d354-4ecd-b5d6-f9c233efde6a\") " pod="cert-manager/cert-manager-858654f9db-5s8hc" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.135880 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59bl\" (UniqueName: \"kubernetes.io/projected/1889e338-774d-44ba-b369-1de424fa7abd-kube-api-access-h59bl\") pod \"cert-manager-cainjector-cf98fcc89-sknqc\" (UID: \"1889e338-774d-44ba-b369-1de424fa7abd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-sknqc" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.137161 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rjgkr"] Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.139747 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rjgkr" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.141276 4898 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dwx8d" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.151776 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rjgkr"] Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.237328 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h4sj\" (UniqueName: \"kubernetes.io/projected/e19b167c-d354-4ecd-b5d6-f9c233efde6a-kube-api-access-9h4sj\") pod \"cert-manager-858654f9db-5s8hc\" (UID: \"e19b167c-d354-4ecd-b5d6-f9c233efde6a\") " pod="cert-manager/cert-manager-858654f9db-5s8hc" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.237587 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h59bl\" (UniqueName: \"kubernetes.io/projected/1889e338-774d-44ba-b369-1de424fa7abd-kube-api-access-h59bl\") pod \"cert-manager-cainjector-cf98fcc89-sknqc\" (UID: \"1889e338-774d-44ba-b369-1de424fa7abd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-sknqc" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.259011 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59bl\" (UniqueName: \"kubernetes.io/projected/1889e338-774d-44ba-b369-1de424fa7abd-kube-api-access-h59bl\") pod \"cert-manager-cainjector-cf98fcc89-sknqc\" (UID: \"1889e338-774d-44ba-b369-1de424fa7abd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-sknqc" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.259060 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h4sj\" (UniqueName: \"kubernetes.io/projected/e19b167c-d354-4ecd-b5d6-f9c233efde6a-kube-api-access-9h4sj\") pod 
\"cert-manager-858654f9db-5s8hc\" (UID: \"e19b167c-d354-4ecd-b5d6-f9c233efde6a\") " pod="cert-manager/cert-manager-858654f9db-5s8hc" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.338422 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-659w4\" (UniqueName: \"kubernetes.io/projected/1a4689db-712b-4b11-8b22-9f81fd060ac2-kube-api-access-659w4\") pod \"cert-manager-webhook-687f57d79b-rjgkr\" (UID: \"1a4689db-712b-4b11-8b22-9f81fd060ac2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rjgkr" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.395207 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sknqc" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.439393 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-659w4\" (UniqueName: \"kubernetes.io/projected/1a4689db-712b-4b11-8b22-9f81fd060ac2-kube-api-access-659w4\") pod \"cert-manager-webhook-687f57d79b-rjgkr\" (UID: \"1a4689db-712b-4b11-8b22-9f81fd060ac2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rjgkr" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.444979 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-5s8hc" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.461604 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-659w4\" (UniqueName: \"kubernetes.io/projected/1a4689db-712b-4b11-8b22-9f81fd060ac2-kube-api-access-659w4\") pod \"cert-manager-webhook-687f57d79b-rjgkr\" (UID: \"1a4689db-712b-4b11-8b22-9f81fd060ac2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rjgkr" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.716845 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-5s8hc"] Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.726384 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.756268 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rjgkr" Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.863646 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-sknqc"] Jan 20 03:59:20 crc kubenswrapper[4898]: W0120 03:59:20.868655 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1889e338_774d_44ba_b369_1de424fa7abd.slice/crio-920e04d55096bc96e922e1efabcc8cfb41fcef72584dc520df668e35461a7746 WatchSource:0}: Error finding container 920e04d55096bc96e922e1efabcc8cfb41fcef72584dc520df668e35461a7746: Status 404 returned error can't find the container with id 920e04d55096bc96e922e1efabcc8cfb41fcef72584dc520df668e35461a7746 Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.936830 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rjgkr"] Jan 20 03:59:20 crc kubenswrapper[4898]: W0120 03:59:20.947028 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a4689db_712b_4b11_8b22_9f81fd060ac2.slice/crio-227370bb69e97bf2b0b995f27db408c1e363847cdbad8298c81b69b4a1441cd4 WatchSource:0}: Error finding container 227370bb69e97bf2b0b995f27db408c1e363847cdbad8298c81b69b4a1441cd4: Status 404 returned error can't find the container with id 227370bb69e97bf2b0b995f27db408c1e363847cdbad8298c81b69b4a1441cd4 Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.994821 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-5s8hc" event={"ID":"e19b167c-d354-4ecd-b5d6-f9c233efde6a","Type":"ContainerStarted","Data":"155ffe5e902fc45d68d6e15fbd2055c0817e8a4823c178c40d377b9c252c83a7"} Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.995769 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rjgkr" event={"ID":"1a4689db-712b-4b11-8b22-9f81fd060ac2","Type":"ContainerStarted","Data":"227370bb69e97bf2b0b995f27db408c1e363847cdbad8298c81b69b4a1441cd4"} Jan 20 03:59:20 crc kubenswrapper[4898]: I0120 03:59:20.996729 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sknqc" event={"ID":"1889e338-774d-44ba-b369-1de424fa7abd","Type":"ContainerStarted","Data":"920e04d55096bc96e922e1efabcc8cfb41fcef72584dc520df668e35461a7746"} Jan 20 03:59:24 crc kubenswrapper[4898]: I0120 03:59:24.015529 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sknqc" event={"ID":"1889e338-774d-44ba-b369-1de424fa7abd","Type":"ContainerStarted","Data":"098ec96b560c4c42b9aaadf2956a657dd334124e13c9dbb39e1969a74c86e268"} Jan 20 03:59:24 crc kubenswrapper[4898]: I0120 03:59:24.034495 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-sknqc" podStartSLOduration=1.46197146 podStartE2EDuration="4.03446837s" podCreationTimestamp="2026-01-20 03:59:20 +0000 UTC" firstStartedPulling="2026-01-20 03:59:20.871875171 +0000 UTC m=+607.471663030" lastFinishedPulling="2026-01-20 03:59:23.444372041 +0000 UTC m=+610.044159940" observedRunningTime="2026-01-20 03:59:24.031615511 +0000 UTC m=+610.631403370" watchObservedRunningTime="2026-01-20 03:59:24.03446837 +0000 UTC m=+610.634256229" Jan 20 03:59:25 crc kubenswrapper[4898]: I0120 03:59:25.023926 4898 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-5s8hc" event={"ID":"e19b167c-d354-4ecd-b5d6-f9c233efde6a","Type":"ContainerStarted","Data":"0e8857a182fe895be350dcfa09ffbbb4c67620213f1db87aafef4c246abe9014"} Jan 20 03:59:25 crc kubenswrapper[4898]: I0120 03:59:25.026042 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rjgkr" event={"ID":"1a4689db-712b-4b11-8b22-9f81fd060ac2","Type":"ContainerStarted","Data":"09600fa731d897cb4e22755fcf0e7c496b7e6131b5e812135aaefd3f9261fcfc"} Jan 20 03:59:25 crc kubenswrapper[4898]: I0120 03:59:25.026169 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-rjgkr" Jan 20 03:59:25 crc kubenswrapper[4898]: I0120 03:59:25.048565 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-5s8hc" podStartSLOduration=1.50259392 podStartE2EDuration="5.04854269s" podCreationTimestamp="2026-01-20 03:59:20 +0000 UTC" firstStartedPulling="2026-01-20 03:59:20.726180318 +0000 UTC m=+607.325968177" lastFinishedPulling="2026-01-20 03:59:24.272129088 +0000 UTC m=+610.871916947" observedRunningTime="2026-01-20 03:59:25.045221117 +0000 UTC m=+611.645009016" watchObservedRunningTime="2026-01-20 03:59:25.04854269 +0000 UTC m=+611.648330589" Jan 20 03:59:25 crc kubenswrapper[4898]: I0120 03:59:25.072001 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-rjgkr" podStartSLOduration=1.679187723 podStartE2EDuration="5.071978397s" podCreationTimestamp="2026-01-20 03:59:20 +0000 UTC" firstStartedPulling="2026-01-20 03:59:20.949237813 +0000 UTC m=+607.549025672" lastFinishedPulling="2026-01-20 03:59:24.342028487 +0000 UTC m=+610.941816346" observedRunningTime="2026-01-20 03:59:25.065070073 +0000 UTC m=+611.664857972" watchObservedRunningTime="2026-01-20 03:59:25.071978397 +0000 UTC m=+611.671766296" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.213617 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hzxwz"] Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.214604 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovn-controller" containerID="cri-o://39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab" gracePeriod=30 Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.214789 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="kube-rbac-proxy-node" containerID="cri-o://9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19" gracePeriod=30 Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.214740 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="nbdb" containerID="cri-o://e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c" gracePeriod=30 Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.214782 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f" gracePeriod=30 Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.214874 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovn-acl-logging" containerID="cri-o://a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce" gracePeriod=30 Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.214854 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="northd" containerID="cri-o://22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b" gracePeriod=30 Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.215145 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="sbdb" containerID="cri-o://582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2" gracePeriod=30 Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.262378 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" containerID="cri-o://6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d" gracePeriod=30 Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.603274 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/3.log" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.606277 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovn-acl-logging/0.log" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.606734 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovn-controller/0.log" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.607349 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.662498 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t5865"] Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.662817 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.662846 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.662862 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="kubecfg-setup" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.662876 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="kubecfg-setup" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.663004 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663020 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.663039 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="nbdb" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663052 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="nbdb" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.663077 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovn-acl-logging" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663091 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovn-acl-logging" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.663111 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="sbdb" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663125 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="sbdb" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.663142 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663155 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.663170 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovn-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663184 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovn-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.663208 4898 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663221 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.663236 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="kube-rbac-proxy-node" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663248 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="kube-rbac-proxy-node" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.663267 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="northd" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663280 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="northd" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663479 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="kube-rbac-proxy-node" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663502 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="nbdb" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663521 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="northd" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663542 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663559 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="sbdb" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663581 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663594 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663610 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663628 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovn-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663647 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovn-acl-logging" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.663664 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.663838 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc 
kubenswrapper[4898]: I0120 03:59:30.663854 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.664073 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: E0120 03:59:30.664271 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.664297 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerName="ovnkube-controller" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.666421 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675469 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/494c5c21-4a7a-4656-9281-a70e76f68f87-ovnkube-script-lib\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675509 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-node-log\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675532 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-var-lib-openvswitch\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675555 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-etc-openvswitch\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675573 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/494c5c21-4a7a-4656-9281-a70e76f68f87-env-overrides\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675589 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/494c5c21-4a7a-4656-9281-a70e76f68f87-ovn-node-metrics-cert\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675715 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-kubelet\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675766 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-run-openvswitch\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675800 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-run-systemd\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675825 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-cni-bin\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675853 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675878 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbck\" (UniqueName: \"kubernetes.io/projected/494c5c21-4a7a-4656-9281-a70e76f68f87-kube-api-access-sfbck\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675918 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/494c5c21-4a7a-4656-9281-a70e76f68f87-ovnkube-config\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.675994 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-slash\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.676043 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-run-ovn\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc 
kubenswrapper[4898]: I0120 03:59:30.676062 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-log-socket\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.676097 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-cni-netd\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.676124 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-systemd-units\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.676143 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-run-netns\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.676158 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-run-ovn-kubernetes\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.758797 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-rjgkr" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777288 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g868\" (UniqueName: \"kubernetes.io/projected/91759377-eaa1-4bcf-99f3-bad12cd513c2-kube-api-access-9g868\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777355 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-script-lib\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777393 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-var-lib-openvswitch\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777479 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-config\") pod 
\"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777521 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-bin\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777562 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-systemd-units\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777612 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777648 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-node-log\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777690 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-ovn\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777727 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovn-node-metrics-cert\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777758 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-netd\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777789 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-openvswitch\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777830 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-ovn-kubernetes\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777877 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-env-overrides\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777910 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-kubelet\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777941 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-log-socket\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.777994 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-slash\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.778064 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-etc-openvswitch\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.778124 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-netns\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.778190 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-systemd\") pod \"91759377-eaa1-4bcf-99f3-bad12cd513c2\" (UID: \"91759377-eaa1-4bcf-99f3-bad12cd513c2\") " Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.778394 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-kubelet\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.778727 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-run-openvswitch\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.778820 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-run-systemd\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.778858 4898 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-cni-bin\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.778937 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.778965 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.778990 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779023 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfbck\" (UniqueName: \"kubernetes.io/projected/494c5c21-4a7a-4656-9281-a70e76f68f87-kube-api-access-sfbck\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.778970 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779007 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779105 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779157 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-node-log" (OuterVolumeSpecName: "node-log") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779184 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779215 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779506 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-log-socket" (OuterVolumeSpecName: "log-socket") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779518 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779544 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-slash" (OuterVolumeSpecName: "host-slash") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779672 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779618 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779706 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779714 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779871 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779907 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779895 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-cni-bin\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779908 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779941 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-run-openvswitch\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779817 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-run-systemd\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.780050 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/494c5c21-4a7a-4656-9281-a70e76f68f87-ovnkube-config\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.780872 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-slash\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.780997 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-run-ovn\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.781171 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-log-socket\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.781275 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-cni-netd\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.781323 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-cni-netd\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.780965 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-slash\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.781278 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-log-socket\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.779876 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-kubelet\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.781359 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/494c5c21-4a7a-4656-9281-a70e76f68f87-ovnkube-config\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.781143 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-run-ovn\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.781736 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-systemd-units\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782058 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-run-netns\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782141 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-run-ovn-kubernetes\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782028 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-systemd-units\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782241 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-run-netns\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782269 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-host-run-ovn-kubernetes\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782326 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/494c5c21-4a7a-4656-9281-a70e76f68f87-ovnkube-script-lib\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782400 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-node-log\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782496 
4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-var-lib-openvswitch\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782591 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-etc-openvswitch\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782661 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/494c5c21-4a7a-4656-9281-a70e76f68f87-env-overrides\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782730 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/494c5c21-4a7a-4656-9281-a70e76f68f87-ovn-node-metrics-cert\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782866 4898 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782927 4898 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782980 4898 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783036 4898 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783084 4898 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783142 4898 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-log-socket\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783208 4898 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-slash\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783264 4898 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783313 4898 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783371 4898 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783441 4898 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783509 4898 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783568 4898 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783623 4898 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783674 4898 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783728 4898 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-node-log\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783784 4898 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783784 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/494c5c21-4a7a-4656-9281-a70e76f68f87-env-overrides\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.782876 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/494c5c21-4a7a-4656-9281-a70e76f68f87-ovnkube-script-lib\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783376 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-var-lib-openvswitch\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783330 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-node-log\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.783380 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494c5c21-4a7a-4656-9281-a70e76f68f87-etc-openvswitch\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.786882 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91759377-eaa1-4bcf-99f3-bad12cd513c2-kube-api-access-9g868" (OuterVolumeSpecName: "kube-api-access-9g868") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "kube-api-access-9g868". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.787226 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/494c5c21-4a7a-4656-9281-a70e76f68f87-ovn-node-metrics-cert\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.788606 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.803757 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "91759377-eaa1-4bcf-99f3-bad12cd513c2" (UID: "91759377-eaa1-4bcf-99f3-bad12cd513c2"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.803817 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfbck\" (UniqueName: \"kubernetes.io/projected/494c5c21-4a7a-4656-9281-a70e76f68f87-kube-api-access-sfbck\") pod \"ovnkube-node-t5865\" (UID: \"494c5c21-4a7a-4656-9281-a70e76f68f87\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.885500 4898 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91759377-eaa1-4bcf-99f3-bad12cd513c2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.885728 4898 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91759377-eaa1-4bcf-99f3-bad12cd513c2-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.885864 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g868\" (UniqueName: \"kubernetes.io/projected/91759377-eaa1-4bcf-99f3-bad12cd513c2-kube-api-access-9g868\") on node \"crc\" DevicePath \"\"" Jan 20 03:59:30 crc kubenswrapper[4898]: I0120 03:59:30.985548 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:31 crc kubenswrapper[4898]: W0120 03:59:31.017586 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod494c5c21_4a7a_4656_9281_a70e76f68f87.slice/crio-f30046abd590c84c7f8947a8d78a0ccde2f4b404859e00c59886b8483fb41b3e WatchSource:0}: Error finding container f30046abd590c84c7f8947a8d78a0ccde2f4b404859e00c59886b8483fb41b3e: Status 404 returned error can't find the container with id f30046abd590c84c7f8947a8d78a0ccde2f4b404859e00c59886b8483fb41b3e Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.071270 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" event={"ID":"494c5c21-4a7a-4656-9281-a70e76f68f87","Type":"ContainerStarted","Data":"f30046abd590c84c7f8947a8d78a0ccde2f4b404859e00c59886b8483fb41b3e"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.076576 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovnkube-controller/3.log" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.099142 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovn-acl-logging/0.log" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.102317 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzxwz_91759377-eaa1-4bcf-99f3-bad12cd513c2/ovn-controller/0.log" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103109 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d" exitCode=0 Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103165 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2" exitCode=0 Jan 20 03:59:31 crc 
kubenswrapper[4898]: I0120 03:59:31.103183 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c" exitCode=0 Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103200 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b" exitCode=0 Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103180 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103275 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103329 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103216 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f" exitCode=0 Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103369 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19" exitCode=0 Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103343 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103394 4898 scope.go:117] "RemoveContainer" containerID="6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103353 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103387 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce" exitCode=143 Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103552 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103587 4898 generic.go:334] "Generic (PLEG): container finished" podID="91759377-eaa1-4bcf-99f3-bad12cd513c2" containerID="39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab" exitCode=143 Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103606 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103720 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103795 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103813 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103867 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103888 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103903 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103918 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103976 4898 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.103991 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104016 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104088 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104107 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104122 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104138 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104153 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104167 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104180 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104194 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104210 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104267 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"} Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104295 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"} 
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104364 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104386 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104401 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104497 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104520 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104536 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104593 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104613 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104628 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104643 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104707 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzxwz" event={"ID":"91759377-eaa1-4bcf-99f3-bad12cd513c2","Type":"ContainerDied","Data":"a6d470bab6bd1cc289aa8b22013ee39bba817c177737717bf4bffb1be73faa9c"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104736 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104753 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104768 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104783 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104794 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104804 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104815 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104826 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104837 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.104847 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.108278 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-897rl_1288aab6-09fa-40a3-8ff8-e00002a32d61/kube-multus/2.log"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.109349 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-897rl_1288aab6-09fa-40a3-8ff8-e00002a32d61/kube-multus/1.log"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.109522 4898 generic.go:334] "Generic (PLEG): container finished" podID="1288aab6-09fa-40a3-8ff8-e00002a32d61" containerID="9336477b9ee7e461f6c87e45d05e59a86b8d817b5945c3fa18c56fe5734ab967" exitCode=2
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.109584 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-897rl" event={"ID":"1288aab6-09fa-40a3-8ff8-e00002a32d61","Type":"ContainerDied","Data":"9336477b9ee7e461f6c87e45d05e59a86b8d817b5945c3fa18c56fe5734ab967"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.109628 4898 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b"}
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.110540 4898 scope.go:117] "RemoveContainer" containerID="9336477b9ee7e461f6c87e45d05e59a86b8d817b5945c3fa18c56fe5734ab967"
Jan 20 03:59:31 crc kubenswrapper[4898]: E0120 03:59:31.111069 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-897rl_openshift-multus(1288aab6-09fa-40a3-8ff8-e00002a32d61)\"" pod="openshift-multus/multus-897rl" podUID="1288aab6-09fa-40a3-8ff8-e00002a32d61"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.155895 4898 scope.go:117] "RemoveContainer" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.178330 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hzxwz"]
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.186182 4898 scope.go:117] "RemoveContainer" containerID="582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.187142 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hzxwz"]
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.210751 4898 scope.go:117] "RemoveContainer" containerID="e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.278808 4898 scope.go:117] "RemoveContainer" containerID="22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.295713 4898 scope.go:117] "RemoveContainer" containerID="0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.318073 4898 scope.go:117] "RemoveContainer" containerID="9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.335764 4898 scope.go:117] "RemoveContainer" containerID="a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.393588 4898 scope.go:117] "RemoveContainer" containerID="39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.413332 4898 scope.go:117] "RemoveContainer" containerID="eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.434591 4898 scope.go:117] "RemoveContainer" containerID="6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"
Jan 20 03:59:31 crc kubenswrapper[4898]: E0120 03:59:31.434903 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d\": container with ID starting with 6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d not found: ID does not exist" containerID="6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.434963 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"} err="failed to get container status \"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d\": rpc error: code = NotFound desc = could not find container \"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d\": container with ID starting with 6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.434993 4898 scope.go:117] "RemoveContainer" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"
Jan 20 03:59:31 crc kubenswrapper[4898]: E0120 03:59:31.435303 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\": container with ID starting with b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e not found: ID does not exist" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.435360 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"} err="failed to get container status \"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\": rpc error: code = NotFound desc = could not find container \"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\": container with ID starting with b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.435400 4898 scope.go:117] "RemoveContainer" containerID="582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"
Jan 20 03:59:31 crc kubenswrapper[4898]: E0120 03:59:31.435906 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\": container with ID starting with 582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2 not found: ID does not exist" containerID="582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.435941 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"} err="failed to get container status \"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\": rpc error: code = NotFound desc = could not find container \"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\": container with ID starting with 582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2 not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.435961 4898 scope.go:117] "RemoveContainer" containerID="e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"
Jan 20 03:59:31 crc kubenswrapper[4898]: E0120 03:59:31.436282 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\": container with ID starting with e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c not found: ID does not exist" containerID="e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.436326 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"} err="failed to get container status \"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\": rpc error: code = NotFound desc = could not find container \"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\": container with ID starting with e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.436353 4898 scope.go:117] "RemoveContainer" containerID="22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"
Jan 20 03:59:31 crc kubenswrapper[4898]: E0120 03:59:31.436835 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\": container with ID starting with 22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b not found: ID does not exist" containerID="22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.436885 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"} err="failed to get container status \"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\": rpc error: code = NotFound desc = could not find container \"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\": container with ID starting with 22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.436910 4898 scope.go:117] "RemoveContainer" containerID="0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"
Jan 20 03:59:31 crc kubenswrapper[4898]: E0120 03:59:31.437247 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\": container with ID starting with 0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f not found: ID does not exist" containerID="0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.437291 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"} err="failed to get container status \"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\": rpc error: code = NotFound desc = could not find container \"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\": container with ID starting with 0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.437321 4898 scope.go:117] "RemoveContainer" containerID="9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"
Jan 20 03:59:31 crc kubenswrapper[4898]: E0120 03:59:31.437858 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\": container with ID starting with 9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19 not found: ID does not exist" containerID="9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.437899 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"} err="failed to get container status \"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\": rpc error: code = NotFound desc = could not find container \"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\": container with ID starting with 9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19 not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.437944 4898 scope.go:117] "RemoveContainer" containerID="a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"
Jan 20 03:59:31 crc kubenswrapper[4898]: E0120 03:59:31.438260 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\": container with ID starting with a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce not found: ID does not exist" containerID="a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.438295 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"} err="failed to get container status \"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\": rpc error: code = NotFound desc = could not find container \"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\": container with ID starting with a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.438320 4898 scope.go:117] "RemoveContainer" containerID="39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"
Jan 20 03:59:31 crc kubenswrapper[4898]: E0120 03:59:31.438692 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\": container with ID starting with 39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab not found: ID does not exist" containerID="39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.438736 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"} err="failed to get container status \"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\": rpc error: code = NotFound desc = could not find container \"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\": container with ID starting with 39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.438764 4898 scope.go:117] "RemoveContainer" containerID="eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"
Jan 20 03:59:31 crc kubenswrapper[4898]: E0120 03:59:31.439219 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\": container with ID starting with eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630 not found: ID does not exist" containerID="eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.439254 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"} err="failed to get container status \"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\": rpc error: code = NotFound desc = could not find container \"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\": container with ID starting with eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630 not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.439276 4898 scope.go:117] "RemoveContainer" containerID="6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.439839 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"} err="failed to get container status \"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d\": rpc error: code = NotFound desc = could not find container \"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d\": container with ID starting with 6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.439909 4898 scope.go:117] "RemoveContainer" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.440363 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"} err="failed to get container status \"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\": rpc error: code = NotFound desc = could not find container \"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\": container with ID starting with b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.440498 4898 scope.go:117] "RemoveContainer" containerID="582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.440798 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"} err="failed to get container status \"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\": rpc error: code = NotFound desc = could not find container \"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\": container with ID starting with 582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2 not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.440825 4898 scope.go:117] "RemoveContainer" containerID="e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.441163 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"} err="failed to get container status \"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\": rpc error: code = NotFound desc = could not find container \"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\": container with ID starting with e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.441191 4898 scope.go:117] "RemoveContainer" containerID="22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.441388 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"} err="failed to get container status \"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\": rpc error: code = NotFound desc = could not find container \"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\": container with ID starting with 22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.441413 4898 scope.go:117] "RemoveContainer" containerID="0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.441629 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"} err="failed to get container status \"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\": rpc error: code = NotFound desc = could not find container \"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\": container with ID starting with 0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.441654 4898 scope.go:117] "RemoveContainer" containerID="9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.441981 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"} err="failed to get container status \"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\": rpc error: code = NotFound desc = could not find container \"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\": container with ID starting with 9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19 not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.442010 4898 scope.go:117] "RemoveContainer" containerID="a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.442202 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"} err="failed to get container status \"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\": rpc error: code = NotFound desc = could not find container \"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\": container with ID starting with a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.442231 4898 scope.go:117] "RemoveContainer" containerID="39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.442422 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"} err="failed to get container status \"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\": rpc error: code = NotFound desc = could not find container \"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\": container with ID starting with 39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.442472 4898 scope.go:117] "RemoveContainer" containerID="eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.442652 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"} err="failed to get container status \"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\": rpc error: code = NotFound desc = could not find container \"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\": container with ID starting with eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630 not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.442677 4898 scope.go:117] "RemoveContainer" containerID="6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.442884 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"} err="failed to get container status \"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d\": rpc error: code = NotFound desc = could not find container \"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d\": container with ID starting with 6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.442918 4898 scope.go:117] "RemoveContainer" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.443153 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"} err="failed to get container status \"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\": rpc error: code = NotFound desc = could not find container \"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\": container with ID starting with b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.443188 4898 scope.go:117] "RemoveContainer" containerID="582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.443387 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"} err="failed to get container status \"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\": rpc error: code = NotFound desc = could not find container \"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\": container with ID starting with 582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2 not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.443451 4898 scope.go:117] "RemoveContainer" containerID="e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.443628 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"} err="failed to get container status \"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\": rpc error: code = NotFound desc = could not find container \"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\": container with ID starting with e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.443652 4898 scope.go:117] "RemoveContainer" containerID="22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.444032 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"} err="failed to get container status \"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\": rpc error: code = NotFound desc = could not find container \"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\": container with ID starting with 22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.444073 4898 scope.go:117] "RemoveContainer" containerID="0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.444324 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"} err="failed to get container status \"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\": rpc error: code = NotFound desc = could not find container \"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\": container with ID starting with 0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.444361 4898 scope.go:117] "RemoveContainer" containerID="9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.444962 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"} err="failed to get container status \"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\": rpc error: code = NotFound desc = could not find container \"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\": container with ID starting with 9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19 not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.444989 4898 scope.go:117] "RemoveContainer" containerID="a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.445452 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"} err="failed to get container status \"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\": rpc error: code = NotFound desc = could not find container \"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\": container with ID starting with a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.445569 4898 scope.go:117] "RemoveContainer" containerID="39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.445977 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"} err="failed to get container status \"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\": rpc error: code = NotFound desc = could not find container \"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\": container with ID starting with 39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.446007 4898 scope.go:117] "RemoveContainer" containerID="eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.446374 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"} err="failed to get container status \"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\": rpc error: code = NotFound desc = could not find container \"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\": container with ID starting with eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630 not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.446408 4898 scope.go:117] "RemoveContainer" containerID="6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.446775 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"} err="failed to get container status \"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d\": rpc error: code = NotFound desc = could not find container \"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d\": container with ID starting with 6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.446803 4898 scope.go:117] "RemoveContainer" containerID="b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.447156 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e"} err="failed to get container status \"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\": rpc error: code = NotFound desc = could not find container \"b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e\": container with ID starting with b217503f540ebbe9882ed0bdc3b3bb9da495996b5e1e81955ffcc9eb76c11c5e not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.447206 4898 scope.go:117] "RemoveContainer" containerID="582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.447560 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2"} err="failed to get container status \"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\": rpc error: code = NotFound desc = could not find container \"582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2\": container with ID starting with 582af87bb349f6674b6fde36c85e214a6b6ccb5bd67f1c8ae492e6f5b4d4e0c2 not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.447606 4898 scope.go:117] "RemoveContainer" containerID="e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.448068 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c"} err="failed to get container status \"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\": rpc error: code = NotFound desc = could not find container \"e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c\": container with ID starting with e47371798e9b9efe22aa71e04d2a09cbabd6b622a538627efeb0ec6013a37e6c not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.448104 4898 scope.go:117] "RemoveContainer" containerID="22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.448405 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b"} err="failed to get container status \"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\": rpc error: code = NotFound desc = could not find container \"22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b\": container with ID starting with 22824d52c366275c2dfe1983745e3338ccf38756930755b34bc32ed07a011a3b not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.448451 4898 scope.go:117] "RemoveContainer" containerID="0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.448715 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f"} err="failed to get container status \"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\": rpc error: code = NotFound desc = could not find container \"0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f\": container with ID starting with 0f2137d8b649b5af3764466f24a31c7d5fd09d20a730ea2b292a9d3119414b3f not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.448752 4898 scope.go:117] "RemoveContainer" containerID="9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.449178 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19"} err="failed to get container status \"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\": rpc error: code = NotFound desc = could not find container \"9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19\": container with ID starting with 9b9fea8c8d6bedb50bbab313ad8bef2779841d83ce08388a54ddcb6fe732ca19 not found: ID does not exist"
Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.449217 4898 scope.go:117] "RemoveContainer"
containerID="a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.449481 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce"} err="failed to get container status \"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\": rpc error: code = NotFound desc = could not find container \"a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce\": container with ID starting with a210fd885cc405b40ab64b4260b5fe4cacbf05336655963ba430e7067a3de5ce not found: ID does not exist" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.449519 4898 scope.go:117] "RemoveContainer" containerID="39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.449959 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab"} err="failed to get container status \"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\": rpc error: code = NotFound desc = could not find container \"39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab\": container with ID starting with 39875336b48eb77ff06c86f03dce15a2392b1f4845f08386795bb4d4452793ab not found: ID does not exist" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.449988 4898 scope.go:117] "RemoveContainer" containerID="eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.450275 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630"} err="failed to get container status \"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\": rpc error: code = NotFound desc = could not find container \"eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630\": container with ID starting with eaa3268942ce5826ff028bd5a516bb2df7d25fee9d2f516b8b80b015076e5630 not found: ID does not exist" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.450320 4898 scope.go:117] "RemoveContainer" containerID="6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.450902 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d"} err="failed to get container status \"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d\": rpc error: code = NotFound desc = could not find container \"6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d\": container with ID starting with 6bfb0a7a589f31233984e4dbf1a7620d68d36c46500aed7e63e86a7e784df91d not found: ID does not exist" Jan 20 03:59:31 crc kubenswrapper[4898]: I0120 03:59:31.731537 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91759377-eaa1-4bcf-99f3-bad12cd513c2" path="/var/lib/kubelet/pods/91759377-eaa1-4bcf-99f3-bad12cd513c2/volumes" Jan 20 03:59:32 crc kubenswrapper[4898]: I0120 03:59:32.118168 4898 generic.go:334] "Generic (PLEG): container finished" podID="494c5c21-4a7a-4656-9281-a70e76f68f87" containerID="17cb265557efb8b3cca64506f61852d9f95433b236a4e0ea0713b1464a28bc94" exitCode=0 Jan 20 03:59:32 crc kubenswrapper[4898]: I0120 
03:59:32.118251 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" event={"ID":"494c5c21-4a7a-4656-9281-a70e76f68f87","Type":"ContainerDied","Data":"17cb265557efb8b3cca64506f61852d9f95433b236a4e0ea0713b1464a28bc94"} Jan 20 03:59:33 crc kubenswrapper[4898]: I0120 03:59:33.135649 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" event={"ID":"494c5c21-4a7a-4656-9281-a70e76f68f87","Type":"ContainerStarted","Data":"5ea709f4c74e62d89934a4c2be2368041735ba9abdc21c6e3599007caec4bd6d"} Jan 20 03:59:33 crc kubenswrapper[4898]: I0120 03:59:33.136104 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" event={"ID":"494c5c21-4a7a-4656-9281-a70e76f68f87","Type":"ContainerStarted","Data":"120abf10cd4d000cc2ed1d70884c7379092e8b9b9d1f105860946fbbb7ba682d"} Jan 20 03:59:33 crc kubenswrapper[4898]: I0120 03:59:33.136114 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" event={"ID":"494c5c21-4a7a-4656-9281-a70e76f68f87","Type":"ContainerStarted","Data":"d6fa3bd123772d3f3020d4d8361515316c01a44dd752c90f701c4431e4138c36"} Jan 20 03:59:33 crc kubenswrapper[4898]: I0120 03:59:33.136126 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" event={"ID":"494c5c21-4a7a-4656-9281-a70e76f68f87","Type":"ContainerStarted","Data":"4459a9c86f3be9e6ca12be0f609069ac5f2442c20ad94869ca8ed20eea96a4f5"} Jan 20 03:59:33 crc kubenswrapper[4898]: I0120 03:59:33.136134 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" event={"ID":"494c5c21-4a7a-4656-9281-a70e76f68f87","Type":"ContainerStarted","Data":"a7ef78ebe18ff65d91871a941ca3e5d2128802bcaed45a421952b1a4e9d2a83d"} Jan 20 03:59:33 crc kubenswrapper[4898]: I0120 03:59:33.136143 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" event={"ID":"494c5c21-4a7a-4656-9281-a70e76f68f87","Type":"ContainerStarted","Data":"c35799952bd9874a0f7dc677c87d6101f1857f5aabdd377dc616eedbee7b81a3"} Jan 20 03:59:36 crc kubenswrapper[4898]: I0120 03:59:36.171366 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" event={"ID":"494c5c21-4a7a-4656-9281-a70e76f68f87","Type":"ContainerStarted","Data":"300cffdc23f60a4652d52d2ca461a80e96a44473c287fb61c910c6299af62639"} Jan 20 03:59:38 crc kubenswrapper[4898]: I0120 03:59:38.185734 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" event={"ID":"494c5c21-4a7a-4656-9281-a70e76f68f87","Type":"ContainerStarted","Data":"a1427086c805ef9910d99ace5cf036d64c4891e68d33ce138f2cf2b3890a2125"} Jan 20 03:59:38 crc kubenswrapper[4898]: I0120 03:59:38.186074 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:38 crc kubenswrapper[4898]: I0120 03:59:38.186088 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:38 crc kubenswrapper[4898]: I0120 03:59:38.186098 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:38 crc kubenswrapper[4898]: I0120 03:59:38.210362 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:38 crc kubenswrapper[4898]: I0120 03:59:38.210608 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 03:59:38 crc kubenswrapper[4898]: I0120 03:59:38.223079 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" podStartSLOduration=8.223069728 podStartE2EDuration="8.223069728s" podCreationTimestamp="2026-01-20 03:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:59:38.220302102 +0000 UTC m=+624.820089961" watchObservedRunningTime="2026-01-20 03:59:38.223069728 +0000 UTC m=+624.822857587" Jan 20 03:59:39 crc kubenswrapper[4898]: I0120 03:59:39.975931 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 03:59:39 crc kubenswrapper[4898]: I0120 03:59:39.976340 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 03:59:39 crc kubenswrapper[4898]: I0120 03:59:39.976409 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 03:59:39 crc kubenswrapper[4898]: I0120 03:59:39.977925 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44d3f9b5ec84966828017ff7ecf7fddfe0954e43f58c20d5a24c6fd4e2708924"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 03:59:39 crc kubenswrapper[4898]: I0120 03:59:39.978047 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://44d3f9b5ec84966828017ff7ecf7fddfe0954e43f58c20d5a24c6fd4e2708924" gracePeriod=600 Jan 20 03:59:40 crc kubenswrapper[4898]: I0120 03:59:40.198848 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="44d3f9b5ec84966828017ff7ecf7fddfe0954e43f58c20d5a24c6fd4e2708924" exitCode=0 Jan 20 03:59:40 crc kubenswrapper[4898]: I0120 03:59:40.198911 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"44d3f9b5ec84966828017ff7ecf7fddfe0954e43f58c20d5a24c6fd4e2708924"} Jan 20 03:59:40 crc kubenswrapper[4898]: I0120 03:59:40.198990 4898 scope.go:117] "RemoveContainer" containerID="8cbb7ad6d85d39ea7ff2c1068b3057e97901016363b5fcbcec2aac6f311cf2b5" Jan 20 03:59:41 crc kubenswrapper[4898]: I0120 03:59:41.207838 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" 
event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"4ba5e3101afcfdc26fb12699a1870157681fb7b78baca1e12fdaf156b52381e1"} Jan 20 03:59:45 crc kubenswrapper[4898]: I0120 03:59:45.722083 4898 scope.go:117] "RemoveContainer" containerID="9336477b9ee7e461f6c87e45d05e59a86b8d817b5945c3fa18c56fe5734ab967" Jan 20 03:59:45 crc kubenswrapper[4898]: E0120 03:59:45.722956 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-897rl_openshift-multus(1288aab6-09fa-40a3-8ff8-e00002a32d61)\"" pod="openshift-multus/multus-897rl" podUID="1288aab6-09fa-40a3-8ff8-e00002a32d61" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.191875 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs"] Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.195167 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.200850 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.201215 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.207999 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs"] Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.225744 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02cc424a-e563-4b3d-8fa8-f67b29c67d39-config-volume\") pod \"collect-profiles-29481360-w9zhs\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.225811 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02cc424a-e563-4b3d-8fa8-f67b29c67d39-secret-volume\") pod \"collect-profiles-29481360-w9zhs\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.226319 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtzxh\" (UniqueName: \"kubernetes.io/projected/02cc424a-e563-4b3d-8fa8-f67b29c67d39-kube-api-access-dtzxh\") pod \"collect-profiles-29481360-w9zhs\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.327409 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtzxh\" (UniqueName: \"kubernetes.io/projected/02cc424a-e563-4b3d-8fa8-f67b29c67d39-kube-api-access-dtzxh\") pod \"collect-profiles-29481360-w9zhs\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 
04:00:00.327547 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02cc424a-e563-4b3d-8fa8-f67b29c67d39-config-volume\") pod \"collect-profiles-29481360-w9zhs\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.327588 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02cc424a-e563-4b3d-8fa8-f67b29c67d39-secret-volume\") pod \"collect-profiles-29481360-w9zhs\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.330161 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02cc424a-e563-4b3d-8fa8-f67b29c67d39-config-volume\") pod \"collect-profiles-29481360-w9zhs\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.335712 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02cc424a-e563-4b3d-8fa8-f67b29c67d39-secret-volume\") pod \"collect-profiles-29481360-w9zhs\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.358926 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtzxh\" (UniqueName: \"kubernetes.io/projected/02cc424a-e563-4b3d-8fa8-f67b29c67d39-kube-api-access-dtzxh\") pod \"collect-profiles-29481360-w9zhs\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.526795 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: E0120 04:00:00.572274 4898 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager_02cc424a-e563-4b3d-8fa8-f67b29c67d39_0(e3605b31be4138166f81aa8ea72db5df61cdc0c118dcd8e68d2e5c3d2745fb6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 04:00:00 crc kubenswrapper[4898]: E0120 04:00:00.572384 4898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager_02cc424a-e563-4b3d-8fa8-f67b29c67d39_0(e3605b31be4138166f81aa8ea72db5df61cdc0c118dcd8e68d2e5c3d2745fb6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: E0120 04:00:00.572425 4898 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager_02cc424a-e563-4b3d-8fa8-f67b29c67d39_0(e3605b31be4138166f81aa8ea72db5df61cdc0c118dcd8e68d2e5c3d2745fb6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:00 crc kubenswrapper[4898]: E0120 04:00:00.572534 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager(02cc424a-e563-4b3d-8fa8-f67b29c67d39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager(02cc424a-e563-4b3d-8fa8-f67b29c67d39)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager_02cc424a-e563-4b3d-8fa8-f67b29c67d39_0(e3605b31be4138166f81aa8ea72db5df61cdc0c118dcd8e68d2e5c3d2745fb6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" podUID="02cc424a-e563-4b3d-8fa8-f67b29c67d39" Jan 20 04:00:00 crc kubenswrapper[4898]: I0120 04:00:00.721034 4898 scope.go:117] "RemoveContainer" containerID="9336477b9ee7e461f6c87e45d05e59a86b8d817b5945c3fa18c56fe5734ab967" Jan 20 04:00:01 crc kubenswrapper[4898]: I0120 04:00:01.009047 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t5865" Jan 20 04:00:01 crc kubenswrapper[4898]: I0120 04:00:01.332792 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-897rl_1288aab6-09fa-40a3-8ff8-e00002a32d61/kube-multus/2.log" Jan 20 04:00:01 crc kubenswrapper[4898]: I0120 04:00:01.334314 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-897rl_1288aab6-09fa-40a3-8ff8-e00002a32d61/kube-multus/1.log" Jan 20 04:00:01 crc kubenswrapper[4898]: I0120 04:00:01.334409 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:01 crc kubenswrapper[4898]: I0120 04:00:01.334511 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-897rl" event={"ID":"1288aab6-09fa-40a3-8ff8-e00002a32d61","Type":"ContainerStarted","Data":"14a81041d956fe607f816d44c9b02a84d5d6f4ac46e599c28ad78334f1e44099"} Jan 20 04:00:01 crc kubenswrapper[4898]: I0120 04:00:01.334842 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:01 crc kubenswrapper[4898]: E0120 04:00:01.370902 4898 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager_02cc424a-e563-4b3d-8fa8-f67b29c67d39_0(4eec5430f18ac098ee2e034190a3e9eacfa5e38b055a2175b328efeb237a1e24): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 20 04:00:01 crc kubenswrapper[4898]: E0120 04:00:01.370979 4898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager_02cc424a-e563-4b3d-8fa8-f67b29c67d39_0(4eec5430f18ac098ee2e034190a3e9eacfa5e38b055a2175b328efeb237a1e24): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:01 crc kubenswrapper[4898]: E0120 04:00:01.371016 4898 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager_02cc424a-e563-4b3d-8fa8-f67b29c67d39_0(4eec5430f18ac098ee2e034190a3e9eacfa5e38b055a2175b328efeb237a1e24): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:01 crc kubenswrapper[4898]: E0120 04:00:01.371091 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager(02cc424a-e563-4b3d-8fa8-f67b29c67d39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager(02cc424a-e563-4b3d-8fa8-f67b29c67d39)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29481360-w9zhs_openshift-operator-lifecycle-manager_02cc424a-e563-4b3d-8fa8-f67b29c67d39_0(4eec5430f18ac098ee2e034190a3e9eacfa5e38b055a2175b328efeb237a1e24): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" podUID="02cc424a-e563-4b3d-8fa8-f67b29c67d39" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.206997 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl"] Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.209084 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.210416 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.214356 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl"] Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.252239 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.252284 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phr68\" (UniqueName: \"kubernetes.io/projected/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-kube-api-access-phr68\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.252343 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.353410 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phr68\" (UniqueName: \"kubernetes.io/projected/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-kube-api-access-phr68\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.353569 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.353614 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.354531 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.354544 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.380515 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phr68\" (UniqueName: \"kubernetes.io/projected/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-kube-api-access-phr68\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.523776 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:09 crc kubenswrapper[4898]: I0120 04:00:09.749106 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl"] Jan 20 04:00:10 crc kubenswrapper[4898]: I0120 04:00:10.391667 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" event={"ID":"1ebb9a61-7bd6-434c-b16d-2d08d38ef556","Type":"ContainerStarted","Data":"f971cee1d2a24fa290e235050670f8f8d766a97fee9e6a83f77967eff376f726"} Jan 20 04:00:10 crc kubenswrapper[4898]: I0120 04:00:10.391998 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" event={"ID":"1ebb9a61-7bd6-434c-b16d-2d08d38ef556","Type":"ContainerStarted","Data":"422527e360f6ba693fab7eef5348a690129c0b7887fa4ba3f041e4360406571e"} Jan 20 04:00:11 crc kubenswrapper[4898]: I0120 04:00:11.403294 4898 generic.go:334] "Generic (PLEG): container finished" podID="1ebb9a61-7bd6-434c-b16d-2d08d38ef556" containerID="f971cee1d2a24fa290e235050670f8f8d766a97fee9e6a83f77967eff376f726" exitCode=0 Jan 20 04:00:11 crc kubenswrapper[4898]: I0120 04:00:11.403344 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" event={"ID":"1ebb9a61-7bd6-434c-b16d-2d08d38ef556","Type":"ContainerDied","Data":"f971cee1d2a24fa290e235050670f8f8d766a97fee9e6a83f77967eff376f726"} Jan 20 04:00:13 crc kubenswrapper[4898]: I0120 04:00:13.418374 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" event={"ID":"1ebb9a61-7bd6-434c-b16d-2d08d38ef556","Type":"ContainerStarted","Data":"903b26bbf27d140ce7048e3fb2b085a2ae741fef2fbaa6737c9b84b23b738016"} Jan 20 04:00:14 crc kubenswrapper[4898]: I0120 04:00:14.043570 4898 scope.go:117] "RemoveContainer" containerID="a5987fe772f5a57877b69bf811f4bbbba15ee6778f8e3e8ae66aa1bc501d027b" Jan 20 04:00:14 crc 
kubenswrapper[4898]: I0120 04:00:14.428130 4898 generic.go:334] "Generic (PLEG): container finished" podID="1ebb9a61-7bd6-434c-b16d-2d08d38ef556" containerID="903b26bbf27d140ce7048e3fb2b085a2ae741fef2fbaa6737c9b84b23b738016" exitCode=0 Jan 20 04:00:14 crc kubenswrapper[4898]: I0120 04:00:14.428264 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" event={"ID":"1ebb9a61-7bd6-434c-b16d-2d08d38ef556","Type":"ContainerDied","Data":"903b26bbf27d140ce7048e3fb2b085a2ae741fef2fbaa6737c9b84b23b738016"} Jan 20 04:00:14 crc kubenswrapper[4898]: I0120 04:00:14.433420 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-897rl_1288aab6-09fa-40a3-8ff8-e00002a32d61/kube-multus/2.log" Jan 20 04:00:15 crc kubenswrapper[4898]: I0120 04:00:15.443488 4898 generic.go:334] "Generic (PLEG): container finished" podID="1ebb9a61-7bd6-434c-b16d-2d08d38ef556" containerID="e113c019d946117cce717cc859a7fcf71c67bd81c093755d21a86168d6cb36d1" exitCode=0 Jan 20 04:00:15 crc kubenswrapper[4898]: I0120 04:00:15.443637 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" event={"ID":"1ebb9a61-7bd6-434c-b16d-2d08d38ef556","Type":"ContainerDied","Data":"e113c019d946117cce717cc859a7fcf71c67bd81c093755d21a86168d6cb36d1"} Jan 20 04:00:15 crc kubenswrapper[4898]: I0120 04:00:15.721030 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:15 crc kubenswrapper[4898]: I0120 04:00:15.722144 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:17 crc kubenswrapper[4898]: I0120 04:00:17.427098 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:17 crc kubenswrapper[4898]: I0120 04:00:17.443796 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs"] Jan 20 04:00:17 crc kubenswrapper[4898]: W0120 04:00:17.459641 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02cc424a_e563_4b3d_8fa8_f67b29c67d39.slice/crio-b6d061f81e82b33a22743aece2cbd8375341239b650dd0565b3508541d1c49cc WatchSource:0}: Error finding container b6d061f81e82b33a22743aece2cbd8375341239b650dd0565b3508541d1c49cc: Status 404 returned error can't find the container with id b6d061f81e82b33a22743aece2cbd8375341239b650dd0565b3508541d1c49cc Jan 20 04:00:17 crc kubenswrapper[4898]: I0120 04:00:17.564328 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-util\") pod \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " Jan 20 04:00:17 crc kubenswrapper[4898]: I0120 04:00:17.564508 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-bundle\") pod \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " Jan 20 04:00:17 crc kubenswrapper[4898]: I0120 04:00:17.564563 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phr68\" (UniqueName: \"kubernetes.io/projected/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-kube-api-access-phr68\") pod \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\" (UID: \"1ebb9a61-7bd6-434c-b16d-2d08d38ef556\") " Jan 20 04:00:17 crc kubenswrapper[4898]: I0120 04:00:17.565059 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-bundle" (OuterVolumeSpecName: "bundle") pod "1ebb9a61-7bd6-434c-b16d-2d08d38ef556" (UID: "1ebb9a61-7bd6-434c-b16d-2d08d38ef556"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:00:17 crc kubenswrapper[4898]: I0120 04:00:17.571910 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-kube-api-access-phr68" (OuterVolumeSpecName: "kube-api-access-phr68") pod "1ebb9a61-7bd6-434c-b16d-2d08d38ef556" (UID: "1ebb9a61-7bd6-434c-b16d-2d08d38ef556"). InnerVolumeSpecName "kube-api-access-phr68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:00:17 crc kubenswrapper[4898]: I0120 04:00:17.574766 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-util" (OuterVolumeSpecName: "util") pod "1ebb9a61-7bd6-434c-b16d-2d08d38ef556" (UID: "1ebb9a61-7bd6-434c-b16d-2d08d38ef556"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:00:17 crc kubenswrapper[4898]: I0120 04:00:17.666077 4898 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-util\") on node \"crc\" DevicePath \"\"" Jan 20 04:00:17 crc kubenswrapper[4898]: I0120 04:00:17.666125 4898 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:00:17 crc kubenswrapper[4898]: I0120 04:00:17.666155 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phr68\" (UniqueName: \"kubernetes.io/projected/1ebb9a61-7bd6-434c-b16d-2d08d38ef556-kube-api-access-phr68\") on node \"crc\" DevicePath \"\"" Jan 20 04:00:18 crc kubenswrapper[4898]: I0120 04:00:18.180666 4898 generic.go:334] "Generic (PLEG): container finished" podID="02cc424a-e563-4b3d-8fa8-f67b29c67d39" containerID="67859f9a87b7cfab253bfa978078536a5fd74144d3ef72065a0b64feacdf9bdf" exitCode=0 Jan 20 04:00:18 crc kubenswrapper[4898]: I0120 04:00:18.180766 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" event={"ID":"02cc424a-e563-4b3d-8fa8-f67b29c67d39","Type":"ContainerDied","Data":"67859f9a87b7cfab253bfa978078536a5fd74144d3ef72065a0b64feacdf9bdf"} Jan 20 04:00:18 crc kubenswrapper[4898]: I0120 04:00:18.180824 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" event={"ID":"02cc424a-e563-4b3d-8fa8-f67b29c67d39","Type":"ContainerStarted","Data":"b6d061f81e82b33a22743aece2cbd8375341239b650dd0565b3508541d1c49cc"} Jan 20 04:00:18 crc kubenswrapper[4898]: I0120 04:00:18.185738 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" event={"ID":"1ebb9a61-7bd6-434c-b16d-2d08d38ef556","Type":"ContainerDied","Data":"422527e360f6ba693fab7eef5348a690129c0b7887fa4ba3f041e4360406571e"} Jan 20 04:00:18 crc kubenswrapper[4898]: I0120 04:00:18.185801 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="422527e360f6ba693fab7eef5348a690129c0b7887fa4ba3f041e4360406571e" Jan 20 04:00:18 crc kubenswrapper[4898]: I0120 04:00:18.185869 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl" Jan 20 04:00:19 crc kubenswrapper[4898]: I0120 04:00:19.542823 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:19 crc kubenswrapper[4898]: I0120 04:00:19.692987 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtzxh\" (UniqueName: \"kubernetes.io/projected/02cc424a-e563-4b3d-8fa8-f67b29c67d39-kube-api-access-dtzxh\") pod \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " Jan 20 04:00:19 crc kubenswrapper[4898]: I0120 04:00:19.693048 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02cc424a-e563-4b3d-8fa8-f67b29c67d39-config-volume\") pod \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " Jan 20 04:00:19 crc kubenswrapper[4898]: I0120 04:00:19.693075 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02cc424a-e563-4b3d-8fa8-f67b29c67d39-secret-volume\") pod \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\" (UID: \"02cc424a-e563-4b3d-8fa8-f67b29c67d39\") " Jan 20 04:00:19 crc kubenswrapper[4898]: I0120 04:00:19.694671 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02cc424a-e563-4b3d-8fa8-f67b29c67d39-config-volume" (OuterVolumeSpecName: "config-volume") pod "02cc424a-e563-4b3d-8fa8-f67b29c67d39" (UID: "02cc424a-e563-4b3d-8fa8-f67b29c67d39"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:00:19 crc kubenswrapper[4898]: I0120 04:00:19.699402 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cc424a-e563-4b3d-8fa8-f67b29c67d39-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "02cc424a-e563-4b3d-8fa8-f67b29c67d39" (UID: "02cc424a-e563-4b3d-8fa8-f67b29c67d39"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:00:19 crc kubenswrapper[4898]: I0120 04:00:19.701645 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02cc424a-e563-4b3d-8fa8-f67b29c67d39-kube-api-access-dtzxh" (OuterVolumeSpecName: "kube-api-access-dtzxh") pod "02cc424a-e563-4b3d-8fa8-f67b29c67d39" (UID: "02cc424a-e563-4b3d-8fa8-f67b29c67d39"). InnerVolumeSpecName "kube-api-access-dtzxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:00:19 crc kubenswrapper[4898]: I0120 04:00:19.794401 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtzxh\" (UniqueName: \"kubernetes.io/projected/02cc424a-e563-4b3d-8fa8-f67b29c67d39-kube-api-access-dtzxh\") on node \"crc\" DevicePath \"\"" Jan 20 04:00:19 crc kubenswrapper[4898]: I0120 04:00:19.794540 4898 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02cc424a-e563-4b3d-8fa8-f67b29c67d39-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 04:00:19 crc kubenswrapper[4898]: I0120 04:00:19.794615 4898 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02cc424a-e563-4b3d-8fa8-f67b29c67d39-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.202727 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" event={"ID":"02cc424a-e563-4b3d-8fa8-f67b29c67d39","Type":"ContainerDied","Data":"b6d061f81e82b33a22743aece2cbd8375341239b650dd0565b3508541d1c49cc"} Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.202800 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6d061f81e82b33a22743aece2cbd8375341239b650dd0565b3508541d1c49cc" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.202750 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.832193 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-plsmq"] Jan 20 04:00:20 crc kubenswrapper[4898]: E0120 04:00:20.832578 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebb9a61-7bd6-434c-b16d-2d08d38ef556" containerName="pull" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.832603 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebb9a61-7bd6-434c-b16d-2d08d38ef556" containerName="pull" Jan 20 04:00:20 crc kubenswrapper[4898]: E0120 04:00:20.832623 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebb9a61-7bd6-434c-b16d-2d08d38ef556" containerName="util" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.832634 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebb9a61-7bd6-434c-b16d-2d08d38ef556" containerName="util" Jan 20 04:00:20 crc kubenswrapper[4898]: E0120 04:00:20.832655 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebb9a61-7bd6-434c-b16d-2d08d38ef556" containerName="extract" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.832666 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebb9a61-7bd6-434c-b16d-2d08d38ef556" containerName="extract" Jan 20 04:00:20 crc kubenswrapper[4898]: E0120 04:00:20.832677 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02cc424a-e563-4b3d-8fa8-f67b29c67d39" containerName="collect-profiles" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.832689 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="02cc424a-e563-4b3d-8fa8-f67b29c67d39" containerName="collect-profiles" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.832857 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="02cc424a-e563-4b3d-8fa8-f67b29c67d39" containerName="collect-profiles" 
Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.832890 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ebb9a61-7bd6-434c-b16d-2d08d38ef556" containerName="extract" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.833428 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-plsmq" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.836415 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-dbttm" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.836741 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.841480 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-plsmq"] Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.846749 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 20 04:00:20 crc kubenswrapper[4898]: I0120 04:00:20.908513 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lljd8\" (UniqueName: \"kubernetes.io/projected/268b5e9a-7692-44e0-989a-bbbeeaee9d51-kube-api-access-lljd8\") pod \"nmstate-operator-646758c888-plsmq\" (UID: \"268b5e9a-7692-44e0-989a-bbbeeaee9d51\") " pod="openshift-nmstate/nmstate-operator-646758c888-plsmq" Jan 20 04:00:21 crc kubenswrapper[4898]: I0120 04:00:21.009904 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lljd8\" (UniqueName: \"kubernetes.io/projected/268b5e9a-7692-44e0-989a-bbbeeaee9d51-kube-api-access-lljd8\") pod \"nmstate-operator-646758c888-plsmq\" (UID: \"268b5e9a-7692-44e0-989a-bbbeeaee9d51\") " pod="openshift-nmstate/nmstate-operator-646758c888-plsmq" Jan 20 04:00:21 crc kubenswrapper[4898]: I0120 04:00:21.033881 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lljd8\" (UniqueName: \"kubernetes.io/projected/268b5e9a-7692-44e0-989a-bbbeeaee9d51-kube-api-access-lljd8\") pod \"nmstate-operator-646758c888-plsmq\" (UID: \"268b5e9a-7692-44e0-989a-bbbeeaee9d51\") " pod="openshift-nmstate/nmstate-operator-646758c888-plsmq" Jan 20 04:00:21 crc kubenswrapper[4898]: I0120 04:00:21.152472 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-plsmq" Jan 20 04:00:21 crc kubenswrapper[4898]: I0120 04:00:21.358036 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-plsmq"] Jan 20 04:00:21 crc kubenswrapper[4898]: W0120 04:00:21.367513 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod268b5e9a_7692_44e0_989a_bbbeeaee9d51.slice/crio-008918923cebac9efa9fb82e06f6ac810678afae029334d90abe7a0fbc776f73 WatchSource:0}: Error finding container 008918923cebac9efa9fb82e06f6ac810678afae029334d90abe7a0fbc776f73: Status 404 returned error can't find the container with id 008918923cebac9efa9fb82e06f6ac810678afae029334d90abe7a0fbc776f73 Jan 20 04:00:22 crc kubenswrapper[4898]: I0120 04:00:22.218810 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-plsmq" event={"ID":"268b5e9a-7692-44e0-989a-bbbeeaee9d51","Type":"ContainerStarted","Data":"008918923cebac9efa9fb82e06f6ac810678afae029334d90abe7a0fbc776f73"} Jan 20 04:00:25 crc kubenswrapper[4898]: I0120 04:00:25.241837 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-plsmq" event={"ID":"268b5e9a-7692-44e0-989a-bbbeeaee9d51","Type":"ContainerStarted","Data":"b526cdb65115b4edd3ae57844076384e52685540037fc6c733afa47dfdb590e5"} Jan 20 04:00:25 crc kubenswrapper[4898]: I0120 04:00:25.274802 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-plsmq" podStartSLOduration=2.3304202099999998 podStartE2EDuration="5.2747515s" podCreationTimestamp="2026-01-20 04:00:20 +0000 UTC" firstStartedPulling="2026-01-20 04:00:21.371283109 +0000 UTC m=+667.971070978" lastFinishedPulling="2026-01-20 04:00:24.315614399 +0000 UTC m=+670.915402268" observedRunningTime="2026-01-20 04:00:25.273065747 +0000 UTC m=+671.872853646" watchObservedRunningTime="2026-01-20 04:00:25.2747515 +0000 UTC m=+671.874539399" Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.908158 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6zv69"] Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.909786 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-6zv69" Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.911663 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-sg8vd" Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.934953 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh"] Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.936224 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.938778 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.939648 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6zv69"] Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.941923 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gqzx\" (UniqueName: \"kubernetes.io/projected/2575ee56-4994-4cc5-b686-9974bc3ba295-kube-api-access-9gqzx\") pod \"nmstate-metrics-54757c584b-6zv69\" (UID: \"2575ee56-4994-4cc5-b686-9974bc3ba295\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6zv69" Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.942001 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f207f14d-bc52-4b04-b325-05ccc1b4351a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2llzh\" (UID: \"f207f14d-bc52-4b04-b325-05ccc1b4351a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.942055 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8q89\" (UniqueName: \"kubernetes.io/projected/f207f14d-bc52-4b04-b325-05ccc1b4351a-kube-api-access-m8q89\") pod \"nmstate-webhook-8474b5b9d8-2llzh\" (UID: \"f207f14d-bc52-4b04-b325-05ccc1b4351a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.959420 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-67l7k"] Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.960269 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:29 crc kubenswrapper[4898]: I0120 04:00:29.976875 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh"] Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.043491 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8q89\" (UniqueName: \"kubernetes.io/projected/f207f14d-bc52-4b04-b325-05ccc1b4351a-kube-api-access-m8q89\") pod \"nmstate-webhook-8474b5b9d8-2llzh\" (UID: \"f207f14d-bc52-4b04-b325-05ccc1b4351a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.043834 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ed4eda94-be5e-496a-922e-96edad89ca92-ovs-socket\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.043868 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ed4eda94-be5e-496a-922e-96edad89ca92-dbus-socket\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.043892 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gqzx\" (UniqueName: \"kubernetes.io/projected/2575ee56-4994-4cc5-b686-9974bc3ba295-kube-api-access-9gqzx\") pod \"nmstate-metrics-54757c584b-6zv69\" (UID: \"2575ee56-4994-4cc5-b686-9974bc3ba295\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6zv69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.043911 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnk64\" (UniqueName: \"kubernetes.io/projected/ed4eda94-be5e-496a-922e-96edad89ca92-kube-api-access-pnk64\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.044156 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ed4eda94-be5e-496a-922e-96edad89ca92-nmstate-lock\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.044242 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f207f14d-bc52-4b04-b325-05ccc1b4351a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2llzh\" (UID: \"f207f14d-bc52-4b04-b325-05ccc1b4351a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.062489 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gqzx\" (UniqueName: \"kubernetes.io/projected/2575ee56-4994-4cc5-b686-9974bc3ba295-kube-api-access-9gqzx\") pod \"nmstate-metrics-54757c584b-6zv69\" (UID: \"2575ee56-4994-4cc5-b686-9974bc3ba295\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6zv69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 
04:00:30.062912 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8q89\" (UniqueName: \"kubernetes.io/projected/f207f14d-bc52-4b04-b325-05ccc1b4351a-kube-api-access-m8q89\") pod \"nmstate-webhook-8474b5b9d8-2llzh\" (UID: \"f207f14d-bc52-4b04-b325-05ccc1b4351a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.066813 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f207f14d-bc52-4b04-b325-05ccc1b4351a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2llzh\" (UID: \"f207f14d-bc52-4b04-b325-05ccc1b4351a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.068857 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64"] Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.071633 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.082344 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64"] Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.083310 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.083368 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.083575 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-mpwqn" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.145271 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ed4eda94-be5e-496a-922e-96edad89ca92-ovs-socket\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.145323 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b61979e3-553f-4098-a721-419fdc230e8b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bhn64\" (UID: \"b61979e3-553f-4098-a721-419fdc230e8b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.145349 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b61979e3-553f-4098-a721-419fdc230e8b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bhn64\" (UID: \"b61979e3-553f-4098-a721-419fdc230e8b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.145370 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ed4eda94-be5e-496a-922e-96edad89ca92-dbus-socket\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.145395 4898 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pnk64\" (UniqueName: \"kubernetes.io/projected/ed4eda94-be5e-496a-922e-96edad89ca92-kube-api-access-pnk64\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.145411 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt5jz\" (UniqueName: \"kubernetes.io/projected/b61979e3-553f-4098-a721-419fdc230e8b-kube-api-access-pt5jz\") pod \"nmstate-console-plugin-7754f76f8b-bhn64\" (UID: \"b61979e3-553f-4098-a721-419fdc230e8b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.145437 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ed4eda94-be5e-496a-922e-96edad89ca92-ovs-socket\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.145447 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ed4eda94-be5e-496a-922e-96edad89ca92-nmstate-lock\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.145692 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ed4eda94-be5e-496a-922e-96edad89ca92-nmstate-lock\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.145791 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ed4eda94-be5e-496a-922e-96edad89ca92-dbus-socket\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.163411 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnk64\" (UniqueName: \"kubernetes.io/projected/ed4eda94-be5e-496a-922e-96edad89ca92-kube-api-access-pnk64\") pod \"nmstate-handler-67l7k\" (UID: \"ed4eda94-be5e-496a-922e-96edad89ca92\") " pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.228061 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-6zv69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.244763 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-ff94ddbd5-94r69"] Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.245627 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.246387 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b61979e3-553f-4098-a721-419fdc230e8b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bhn64\" (UID: \"b61979e3-553f-4098-a721-419fdc230e8b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.246453 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b61979e3-553f-4098-a721-419fdc230e8b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bhn64\" (UID: \"b61979e3-553f-4098-a721-419fdc230e8b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.246485 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt5jz\" (UniqueName: \"kubernetes.io/projected/b61979e3-553f-4098-a721-419fdc230e8b-kube-api-access-pt5jz\") pod \"nmstate-console-plugin-7754f76f8b-bhn64\" (UID: \"b61979e3-553f-4098-a721-419fdc230e8b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.248230 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b61979e3-553f-4098-a721-419fdc230e8b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bhn64\" (UID: \"b61979e3-553f-4098-a721-419fdc230e8b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.250773 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b61979e3-553f-4098-a721-419fdc230e8b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bhn64\" (UID: \"b61979e3-553f-4098-a721-419fdc230e8b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.256922 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.263951 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-ff94ddbd5-94r69"] Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.277317 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.277755 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt5jz\" (UniqueName: \"kubernetes.io/projected/b61979e3-553f-4098-a721-419fdc230e8b-kube-api-access-pt5jz\") pod \"nmstate-console-plugin-7754f76f8b-bhn64\" (UID: \"b61979e3-553f-4098-a721-419fdc230e8b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.347500 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cdbc387-b8a0-4195-8cc7-781795f499d2-console-oauth-config\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.347897 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-oauth-serving-cert\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.347950 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-trusted-ca-bundle\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.347992 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-service-ca\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.348023 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cdbc387-b8a0-4195-8cc7-781795f499d2-console-serving-cert\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.348068 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn4bc\" (UniqueName: \"kubernetes.io/projected/5cdbc387-b8a0-4195-8cc7-781795f499d2-kube-api-access-jn4bc\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.348091 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-console-config\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.397993 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.448562 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cdbc387-b8a0-4195-8cc7-781795f499d2-console-oauth-config\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.448629 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-oauth-serving-cert\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.448698 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-trusted-ca-bundle\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.448723 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-service-ca\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.448747 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cdbc387-b8a0-4195-8cc7-781795f499d2-console-serving-cert\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.448817 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn4bc\" (UniqueName: \"kubernetes.io/projected/5cdbc387-b8a0-4195-8cc7-781795f499d2-kube-api-access-jn4bc\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.448865 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-console-config\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.449848 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-oauth-serving-cert\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.449920 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-service-ca\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " 
pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.450202 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-console-config\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.450901 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cdbc387-b8a0-4195-8cc7-781795f499d2-trusted-ca-bundle\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.456587 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cdbc387-b8a0-4195-8cc7-781795f499d2-console-oauth-config\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.456603 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cdbc387-b8a0-4195-8cc7-781795f499d2-console-serving-cert\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.471915 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn4bc\" (UniqueName: \"kubernetes.io/projected/5cdbc387-b8a0-4195-8cc7-781795f499d2-kube-api-access-jn4bc\") pod \"console-ff94ddbd5-94r69\" (UID: \"5cdbc387-b8a0-4195-8cc7-781795f499d2\") " pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.475371 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6zv69"] Jan 20 04:00:30 crc kubenswrapper[4898]: W0120 04:00:30.480508 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2575ee56_4994_4cc5_b686_9974bc3ba295.slice/crio-b84815fc522901b719e493067675d0875e3235421c44428dbf51188e4a7da128 WatchSource:0}: Error finding container b84815fc522901b719e493067675d0875e3235421c44428dbf51188e4a7da128: Status 404 returned error can't find the container with id b84815fc522901b719e493067675d0875e3235421c44428dbf51188e4a7da128 Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.590522 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64"] Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.592022 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:30 crc kubenswrapper[4898]: W0120 04:00:30.595737 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb61979e3_553f_4098_a721_419fdc230e8b.slice/crio-cad0d0db17b5d919dd6ad7af8d8418cb9bd7730732cc7f78dd3ed0d6af38d139 WatchSource:0}: Error finding container cad0d0db17b5d919dd6ad7af8d8418cb9bd7730732cc7f78dd3ed0d6af38d139: Status 404 returned error can't find the container with id cad0d0db17b5d919dd6ad7af8d8418cb9bd7730732cc7f78dd3ed0d6af38d139 Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.709017 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh"] Jan 20 04:00:30 crc kubenswrapper[4898]: W0120 04:00:30.715988 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf207f14d_bc52_4b04_b325_05ccc1b4351a.slice/crio-af5c28b88bf5db96edd9c930f066e3ec622fbd7efd4d33ee906e96292e7dbe87 WatchSource:0}: Error finding container af5c28b88bf5db96edd9c930f066e3ec622fbd7efd4d33ee906e96292e7dbe87: Status 404 returned error can't find the container with id af5c28b88bf5db96edd9c930f066e3ec622fbd7efd4d33ee906e96292e7dbe87 Jan 20 04:00:30 crc kubenswrapper[4898]: I0120 04:00:30.779879 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-ff94ddbd5-94r69"] Jan 20 04:00:30 crc kubenswrapper[4898]: W0120 04:00:30.785168 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cdbc387_b8a0_4195_8cc7_781795f499d2.slice/crio-03a10facb4a143f3e7e20e62f840be93eb140cc47b23f966f52427b300691443 WatchSource:0}: Error finding container 03a10facb4a143f3e7e20e62f840be93eb140cc47b23f966f52427b300691443: Status 404 returned error can't find the container with id 03a10facb4a143f3e7e20e62f840be93eb140cc47b23f966f52427b300691443 Jan 20 04:00:31 crc kubenswrapper[4898]: I0120 04:00:31.286578 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" event={"ID":"b61979e3-553f-4098-a721-419fdc230e8b","Type":"ContainerStarted","Data":"cad0d0db17b5d919dd6ad7af8d8418cb9bd7730732cc7f78dd3ed0d6af38d139"} Jan 20 04:00:31 crc kubenswrapper[4898]: I0120 04:00:31.288665 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-67l7k" event={"ID":"ed4eda94-be5e-496a-922e-96edad89ca92","Type":"ContainerStarted","Data":"785638f24e1019c404ba53e0c5b10ab9845efb61db237b66a498c4e2beb6b747"} Jan 20 04:00:31 crc kubenswrapper[4898]: I0120 04:00:31.290214 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" event={"ID":"f207f14d-bc52-4b04-b325-05ccc1b4351a","Type":"ContainerStarted","Data":"af5c28b88bf5db96edd9c930f066e3ec622fbd7efd4d33ee906e96292e7dbe87"} Jan 20 04:00:31 crc kubenswrapper[4898]: I0120 04:00:31.291633 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6zv69" event={"ID":"2575ee56-4994-4cc5-b686-9974bc3ba295","Type":"ContainerStarted","Data":"b84815fc522901b719e493067675d0875e3235421c44428dbf51188e4a7da128"} Jan 20 04:00:31 crc kubenswrapper[4898]: I0120 04:00:31.295011 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-ff94ddbd5-94r69" 
event={"ID":"5cdbc387-b8a0-4195-8cc7-781795f499d2","Type":"ContainerStarted","Data":"2586d6f0c4fcea5829b4c0880840e7378ea6003ed5df0fd5704b937900b09276"} Jan 20 04:00:31 crc kubenswrapper[4898]: I0120 04:00:31.295056 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-ff94ddbd5-94r69" event={"ID":"5cdbc387-b8a0-4195-8cc7-781795f499d2","Type":"ContainerStarted","Data":"03a10facb4a143f3e7e20e62f840be93eb140cc47b23f966f52427b300691443"} Jan 20 04:00:31 crc kubenswrapper[4898]: I0120 04:00:31.321699 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-ff94ddbd5-94r69" podStartSLOduration=1.321679532 podStartE2EDuration="1.321679532s" podCreationTimestamp="2026-01-20 04:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:00:31.319817184 +0000 UTC m=+677.919605043" watchObservedRunningTime="2026-01-20 04:00:31.321679532 +0000 UTC m=+677.921467391" Jan 20 04:00:33 crc kubenswrapper[4898]: I0120 04:00:33.316865 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" event={"ID":"b61979e3-553f-4098-a721-419fdc230e8b","Type":"ContainerStarted","Data":"b4d67170e337385c0a0516bea5622d45e2fba81c6df3e0a78d52ce0f9cd6bcb6"} Jan 20 04:00:33 crc kubenswrapper[4898]: I0120 04:00:33.330811 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bhn64" podStartSLOduration=0.801004182 podStartE2EDuration="3.330793018s" podCreationTimestamp="2026-01-20 04:00:30 +0000 UTC" firstStartedPulling="2026-01-20 04:00:30.600860193 +0000 UTC m=+677.200648052" lastFinishedPulling="2026-01-20 04:00:33.130649029 +0000 UTC m=+679.730436888" observedRunningTime="2026-01-20 04:00:33.329750226 +0000 UTC m=+679.929538085" watchObservedRunningTime="2026-01-20 04:00:33.330793018 +0000 UTC m=+679.930580877" Jan 20 04:00:34 crc kubenswrapper[4898]: I0120 04:00:34.323645 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" event={"ID":"f207f14d-bc52-4b04-b325-05ccc1b4351a","Type":"ContainerStarted","Data":"86933c81b07bdd9674138dba8d0bc3fda0533527eef9c0221b50e2ad2737ea06"} Jan 20 04:00:34 crc kubenswrapper[4898]: I0120 04:00:34.324107 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" Jan 20 04:00:34 crc kubenswrapper[4898]: I0120 04:00:34.328079 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6zv69" event={"ID":"2575ee56-4994-4cc5-b686-9974bc3ba295","Type":"ContainerStarted","Data":"ed71f1b118b49b4fc46a826f9cc3f4df686f072bc80a5bbd5b7a8372e3c5f02b"} Jan 20 04:00:34 crc kubenswrapper[4898]: I0120 04:00:34.346360 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" podStartSLOduration=1.974712027 podStartE2EDuration="5.34633744s" podCreationTimestamp="2026-01-20 04:00:29 +0000 UTC" firstStartedPulling="2026-01-20 04:00:30.71862189 +0000 UTC m=+677.318409749" lastFinishedPulling="2026-01-20 04:00:34.090247293 +0000 UTC m=+680.690035162" observedRunningTime="2026-01-20 04:00:34.343623355 +0000 UTC m=+680.943411214" watchObservedRunningTime="2026-01-20 04:00:34.34633744 +0000 UTC m=+680.946125299" Jan 20 04:00:35 crc kubenswrapper[4898]: I0120 04:00:35.337640 
4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-67l7k" event={"ID":"ed4eda94-be5e-496a-922e-96edad89ca92","Type":"ContainerStarted","Data":"4348deb7cdb73ddd72267fd79d76a021f89eacf7f7c18b30404b0d63abc910cf"} Jan 20 04:00:35 crc kubenswrapper[4898]: I0120 04:00:35.355446 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-67l7k" podStartSLOduration=2.5526203929999998 podStartE2EDuration="6.355389288s" podCreationTimestamp="2026-01-20 04:00:29 +0000 UTC" firstStartedPulling="2026-01-20 04:00:30.306919485 +0000 UTC m=+676.906707344" lastFinishedPulling="2026-01-20 04:00:34.10968838 +0000 UTC m=+680.709476239" observedRunningTime="2026-01-20 04:00:35.354955345 +0000 UTC m=+681.954743244" watchObservedRunningTime="2026-01-20 04:00:35.355389288 +0000 UTC m=+681.955177147" Jan 20 04:00:36 crc kubenswrapper[4898]: I0120 04:00:36.343615 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:38 crc kubenswrapper[4898]: I0120 04:00:38.358835 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6zv69" event={"ID":"2575ee56-4994-4cc5-b686-9974bc3ba295","Type":"ContainerStarted","Data":"f75d96a079c6ebbb2e39c3912cd7dcb59de511c56f5cf590aa7a73f26c6f7139"} Jan 20 04:00:38 crc kubenswrapper[4898]: I0120 04:00:38.386204 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-6zv69" podStartSLOduration=2.532064541 podStartE2EDuration="9.386182809s" podCreationTimestamp="2026-01-20 04:00:29 +0000 UTC" firstStartedPulling="2026-01-20 04:00:30.484566462 +0000 UTC m=+677.084354321" lastFinishedPulling="2026-01-20 04:00:37.33868469 +0000 UTC m=+683.938472589" observedRunningTime="2026-01-20 04:00:38.382229195 +0000 UTC m=+684.982017054" watchObservedRunningTime="2026-01-20 04:00:38.386182809 +0000 UTC m=+684.985970678" Jan 20 04:00:40 crc kubenswrapper[4898]: I0120 04:00:40.321230 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-67l7k" Jan 20 04:00:40 crc kubenswrapper[4898]: I0120 04:00:40.592608 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:40 crc kubenswrapper[4898]: I0120 04:00:40.594248 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:40 crc kubenswrapper[4898]: I0120 04:00:40.601363 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:41 crc kubenswrapper[4898]: I0120 04:00:41.387657 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-ff94ddbd5-94r69" Jan 20 04:00:41 crc kubenswrapper[4898]: I0120 04:00:41.458152 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ctlvr"] Jan 20 04:00:50 crc kubenswrapper[4898]: I0120 04:00:50.263964 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2llzh" Jan 20 04:01:06 crc kubenswrapper[4898]: I0120 04:01:06.516482 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-ctlvr" podUID="480eb1b9-9ac2-4353-9216-751da9b33e4f" 
containerName="console" containerID="cri-o://649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa" gracePeriod=15 Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.037259 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q"] Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.039224 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.042001 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.054933 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q"] Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.202706 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.203041 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4x4\" (UniqueName: \"kubernetes.io/projected/fb18e186-7d5a-4bb9-b100-6f257ee07319-kube-api-access-wk4x4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.203106 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.305189 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.305289 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.305370 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk4x4\" (UniqueName: 
\"kubernetes.io/projected/fb18e186-7d5a-4bb9-b100-6f257ee07319-kube-api-access-wk4x4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.306262 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.306341 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.339813 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk4x4\" (UniqueName: \"kubernetes.io/projected/fb18e186-7d5a-4bb9-b100-6f257ee07319-kube-api-access-wk4x4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.362528 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.498321 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ctlvr_480eb1b9-9ac2-4353-9216-751da9b33e4f/console/0.log" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.498387 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.571570 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ctlvr_480eb1b9-9ac2-4353-9216-751da9b33e4f/console/0.log" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.571604 4898 generic.go:334] "Generic (PLEG): container finished" podID="480eb1b9-9ac2-4353-9216-751da9b33e4f" containerID="649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa" exitCode=2 Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.571632 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ctlvr" event={"ID":"480eb1b9-9ac2-4353-9216-751da9b33e4f","Type":"ContainerDied","Data":"649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa"} Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.571658 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ctlvr" event={"ID":"480eb1b9-9ac2-4353-9216-751da9b33e4f","Type":"ContainerDied","Data":"b47d6c41e5fd6f0ce0131919f584031a09f2b36459f9f89edaca7079e5ff839d"} Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.571672 4898 scope.go:117] "RemoveContainer" containerID="649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.571783 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ctlvr" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.593950 4898 scope.go:117] "RemoveContainer" containerID="649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa" Jan 20 04:01:07 crc kubenswrapper[4898]: E0120 04:01:07.594552 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa\": container with ID starting with 649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa not found: ID does not exist" containerID="649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.594597 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa"} err="failed to get container status \"649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa\": rpc error: code = NotFound desc = could not find container \"649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa\": container with ID starting with 649b449732afa472fff9e5d64f8ca0e28390c824dd355fcda7acd35a9495d0aa not found: ID does not exist" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.612789 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-trusted-ca-bundle\") pod \"480eb1b9-9ac2-4353-9216-751da9b33e4f\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.612852 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8fzg\" (UniqueName: \"kubernetes.io/projected/480eb1b9-9ac2-4353-9216-751da9b33e4f-kube-api-access-t8fzg\") pod \"480eb1b9-9ac2-4353-9216-751da9b33e4f\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " Jan 20 04:01:07 crc 
kubenswrapper[4898]: I0120 04:01:07.612893 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-oauth-serving-cert\") pod \"480eb1b9-9ac2-4353-9216-751da9b33e4f\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.612912 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-service-ca\") pod \"480eb1b9-9ac2-4353-9216-751da9b33e4f\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.612932 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-serving-cert\") pod \"480eb1b9-9ac2-4353-9216-751da9b33e4f\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.612970 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-config\") pod \"480eb1b9-9ac2-4353-9216-751da9b33e4f\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.613019 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-oauth-config\") pod \"480eb1b9-9ac2-4353-9216-751da9b33e4f\" (UID: \"480eb1b9-9ac2-4353-9216-751da9b33e4f\") " Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.613685 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "480eb1b9-9ac2-4353-9216-751da9b33e4f" (UID: "480eb1b9-9ac2-4353-9216-751da9b33e4f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.614240 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "480eb1b9-9ac2-4353-9216-751da9b33e4f" (UID: "480eb1b9-9ac2-4353-9216-751da9b33e4f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.614391 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-config" (OuterVolumeSpecName: "console-config") pod "480eb1b9-9ac2-4353-9216-751da9b33e4f" (UID: "480eb1b9-9ac2-4353-9216-751da9b33e4f"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.614677 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q"] Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.615096 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-service-ca" (OuterVolumeSpecName: "service-ca") pod "480eb1b9-9ac2-4353-9216-751da9b33e4f" (UID: "480eb1b9-9ac2-4353-9216-751da9b33e4f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.618648 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480eb1b9-9ac2-4353-9216-751da9b33e4f-kube-api-access-t8fzg" (OuterVolumeSpecName: "kube-api-access-t8fzg") pod "480eb1b9-9ac2-4353-9216-751da9b33e4f" (UID: "480eb1b9-9ac2-4353-9216-751da9b33e4f"). InnerVolumeSpecName "kube-api-access-t8fzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.618680 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "480eb1b9-9ac2-4353-9216-751da9b33e4f" (UID: "480eb1b9-9ac2-4353-9216-751da9b33e4f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.619482 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "480eb1b9-9ac2-4353-9216-751da9b33e4f" (UID: "480eb1b9-9ac2-4353-9216-751da9b33e4f"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.714989 4898 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.715027 4898 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.715040 4898 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.715051 4898 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.715063 4898 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/480eb1b9-9ac2-4353-9216-751da9b33e4f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.715074 4898 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/480eb1b9-9ac2-4353-9216-751da9b33e4f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.715085 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8fzg\" (UniqueName: \"kubernetes.io/projected/480eb1b9-9ac2-4353-9216-751da9b33e4f-kube-api-access-t8fzg\") on node \"crc\" DevicePath \"\"" Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.917620 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ctlvr"] Jan 20 04:01:07 crc kubenswrapper[4898]: I0120 04:01:07.927219 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ctlvr"] Jan 20 04:01:08 crc kubenswrapper[4898]: I0120 04:01:08.586873 4898 generic.go:334] "Generic (PLEG): container finished" podID="fb18e186-7d5a-4bb9-b100-6f257ee07319" containerID="472e4ac15f312979c64a8c67157717e1a6cc5d5edbc90fd6f896f9afab8f6bf2" exitCode=0 Jan 20 04:01:08 crc kubenswrapper[4898]: I0120 04:01:08.586938 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" event={"ID":"fb18e186-7d5a-4bb9-b100-6f257ee07319","Type":"ContainerDied","Data":"472e4ac15f312979c64a8c67157717e1a6cc5d5edbc90fd6f896f9afab8f6bf2"} Jan 20 04:01:08 crc kubenswrapper[4898]: I0120 04:01:08.586984 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" event={"ID":"fb18e186-7d5a-4bb9-b100-6f257ee07319","Type":"ContainerStarted","Data":"0160db3041b39a61a7680bf3536f44037b655c35674c3305927a0cb6f04a778d"} Jan 20 04:01:09 crc kubenswrapper[4898]: I0120 04:01:09.733804 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480eb1b9-9ac2-4353-9216-751da9b33e4f" path="/var/lib/kubelet/pods/480eb1b9-9ac2-4353-9216-751da9b33e4f/volumes" Jan 20 04:01:10 crc 
kubenswrapper[4898]: I0120 04:01:10.606616 4898 generic.go:334] "Generic (PLEG): container finished" podID="fb18e186-7d5a-4bb9-b100-6f257ee07319" containerID="dad94a379001e55690c1e9896113f37ab30cab5f296d427da1808e1072af1ba5" exitCode=0 Jan 20 04:01:10 crc kubenswrapper[4898]: I0120 04:01:10.606691 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" event={"ID":"fb18e186-7d5a-4bb9-b100-6f257ee07319","Type":"ContainerDied","Data":"dad94a379001e55690c1e9896113f37ab30cab5f296d427da1808e1072af1ba5"} Jan 20 04:01:11 crc kubenswrapper[4898]: I0120 04:01:11.621202 4898 generic.go:334] "Generic (PLEG): container finished" podID="fb18e186-7d5a-4bb9-b100-6f257ee07319" containerID="ff00c39287d2d27b11369def609b3e1b3534dadb7e79c3f3a3d2859121ad0e9e" exitCode=0 Jan 20 04:01:11 crc kubenswrapper[4898]: I0120 04:01:11.621284 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" event={"ID":"fb18e186-7d5a-4bb9-b100-6f257ee07319","Type":"ContainerDied","Data":"ff00c39287d2d27b11369def609b3e1b3534dadb7e79c3f3a3d2859121ad0e9e"} Jan 20 04:01:12 crc kubenswrapper[4898]: I0120 04:01:12.964101 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.104094 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-util\") pod \"fb18e186-7d5a-4bb9-b100-6f257ee07319\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.104194 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk4x4\" (UniqueName: \"kubernetes.io/projected/fb18e186-7d5a-4bb9-b100-6f257ee07319-kube-api-access-wk4x4\") pod \"fb18e186-7d5a-4bb9-b100-6f257ee07319\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.104225 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-bundle\") pod \"fb18e186-7d5a-4bb9-b100-6f257ee07319\" (UID: \"fb18e186-7d5a-4bb9-b100-6f257ee07319\") " Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.105801 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-bundle" (OuterVolumeSpecName: "bundle") pod "fb18e186-7d5a-4bb9-b100-6f257ee07319" (UID: "fb18e186-7d5a-4bb9-b100-6f257ee07319"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.111507 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb18e186-7d5a-4bb9-b100-6f257ee07319-kube-api-access-wk4x4" (OuterVolumeSpecName: "kube-api-access-wk4x4") pod "fb18e186-7d5a-4bb9-b100-6f257ee07319" (UID: "fb18e186-7d5a-4bb9-b100-6f257ee07319"). InnerVolumeSpecName "kube-api-access-wk4x4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.122470 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-util" (OuterVolumeSpecName: "util") pod "fb18e186-7d5a-4bb9-b100-6f257ee07319" (UID: "fb18e186-7d5a-4bb9-b100-6f257ee07319"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.205498 4898 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.205532 4898 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb18e186-7d5a-4bb9-b100-6f257ee07319-util\") on node \"crc\" DevicePath \"\"" Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.205545 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk4x4\" (UniqueName: \"kubernetes.io/projected/fb18e186-7d5a-4bb9-b100-6f257ee07319-kube-api-access-wk4x4\") on node \"crc\" DevicePath \"\"" Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.639065 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" event={"ID":"fb18e186-7d5a-4bb9-b100-6f257ee07319","Type":"ContainerDied","Data":"0160db3041b39a61a7680bf3536f44037b655c35674c3305927a0cb6f04a778d"} Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.639451 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0160db3041b39a61a7680bf3536f44037b655c35674c3305927a0cb6f04a778d" Jan 20 04:01:13 crc kubenswrapper[4898]: I0120 04:01:13.639129 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.270068 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz"] Jan 20 04:01:27 crc kubenswrapper[4898]: E0120 04:01:27.270946 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb18e186-7d5a-4bb9-b100-6f257ee07319" containerName="util" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.270961 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb18e186-7d5a-4bb9-b100-6f257ee07319" containerName="util" Jan 20 04:01:27 crc kubenswrapper[4898]: E0120 04:01:27.270981 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb18e186-7d5a-4bb9-b100-6f257ee07319" containerName="extract" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.270989 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb18e186-7d5a-4bb9-b100-6f257ee07319" containerName="extract" Jan 20 04:01:27 crc kubenswrapper[4898]: E0120 04:01:27.270999 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb18e186-7d5a-4bb9-b100-6f257ee07319" containerName="pull" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.271008 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb18e186-7d5a-4bb9-b100-6f257ee07319" containerName="pull" Jan 20 04:01:27 crc kubenswrapper[4898]: E0120 04:01:27.271024 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480eb1b9-9ac2-4353-9216-751da9b33e4f" containerName="console" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.271032 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="480eb1b9-9ac2-4353-9216-751da9b33e4f" containerName="console" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.271160 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb18e186-7d5a-4bb9-b100-6f257ee07319" containerName="extract" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.271171 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="480eb1b9-9ac2-4353-9216-751da9b33e4f" containerName="console" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.271686 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.274544 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.274658 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.274657 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.277281 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-z2dsp" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.277505 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.287056 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz"] Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.418919 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wrp9\" (UniqueName: \"kubernetes.io/projected/ca361fa9-3501-4e45-b43a-5344a65efac5-kube-api-access-6wrp9\") pod \"metallb-operator-controller-manager-7bfbbff6f9-rppqz\" (UID: \"ca361fa9-3501-4e45-b43a-5344a65efac5\") " pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.419002 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca361fa9-3501-4e45-b43a-5344a65efac5-apiservice-cert\") pod \"metallb-operator-controller-manager-7bfbbff6f9-rppqz\" (UID: \"ca361fa9-3501-4e45-b43a-5344a65efac5\") " pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.419025 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca361fa9-3501-4e45-b43a-5344a65efac5-webhook-cert\") pod \"metallb-operator-controller-manager-7bfbbff6f9-rppqz\" (UID: \"ca361fa9-3501-4e45-b43a-5344a65efac5\") " pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.520373 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wrp9\" (UniqueName: \"kubernetes.io/projected/ca361fa9-3501-4e45-b43a-5344a65efac5-kube-api-access-6wrp9\") pod \"metallb-operator-controller-manager-7bfbbff6f9-rppqz\" (UID: \"ca361fa9-3501-4e45-b43a-5344a65efac5\") " pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.520472 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca361fa9-3501-4e45-b43a-5344a65efac5-apiservice-cert\") pod \"metallb-operator-controller-manager-7bfbbff6f9-rppqz\" (UID: \"ca361fa9-3501-4e45-b43a-5344a65efac5\") " pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.520496 
4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca361fa9-3501-4e45-b43a-5344a65efac5-webhook-cert\") pod \"metallb-operator-controller-manager-7bfbbff6f9-rppqz\" (UID: \"ca361fa9-3501-4e45-b43a-5344a65efac5\") " pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.526107 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca361fa9-3501-4e45-b43a-5344a65efac5-apiservice-cert\") pod \"metallb-operator-controller-manager-7bfbbff6f9-rppqz\" (UID: \"ca361fa9-3501-4e45-b43a-5344a65efac5\") " pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.528941 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca361fa9-3501-4e45-b43a-5344a65efac5-webhook-cert\") pod \"metallb-operator-controller-manager-7bfbbff6f9-rppqz\" (UID: \"ca361fa9-3501-4e45-b43a-5344a65efac5\") " pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.543989 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wrp9\" (UniqueName: \"kubernetes.io/projected/ca361fa9-3501-4e45-b43a-5344a65efac5-kube-api-access-6wrp9\") pod \"metallb-operator-controller-manager-7bfbbff6f9-rppqz\" (UID: \"ca361fa9-3501-4e45-b43a-5344a65efac5\") " pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.593407 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.594337 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j"] Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.595071 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.599972 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.600394 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nh98g" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.609505 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.613853 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j"] Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.621833 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4436d8b-9043-4dfd-8f29-e6dd7bee46bc-apiservice-cert\") pod \"metallb-operator-webhook-server-656bd465d-j8r6j\" (UID: \"a4436d8b-9043-4dfd-8f29-e6dd7bee46bc\") " pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.621911 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsn5p\" (UniqueName: \"kubernetes.io/projected/a4436d8b-9043-4dfd-8f29-e6dd7bee46bc-kube-api-access-vsn5p\") pod \"metallb-operator-webhook-server-656bd465d-j8r6j\" (UID: \"a4436d8b-9043-4dfd-8f29-e6dd7bee46bc\") " pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.621946 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4436d8b-9043-4dfd-8f29-e6dd7bee46bc-webhook-cert\") pod \"metallb-operator-webhook-server-656bd465d-j8r6j\" (UID: \"a4436d8b-9043-4dfd-8f29-e6dd7bee46bc\") " pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.723324 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4436d8b-9043-4dfd-8f29-e6dd7bee46bc-apiservice-cert\") pod \"metallb-operator-webhook-server-656bd465d-j8r6j\" (UID: \"a4436d8b-9043-4dfd-8f29-e6dd7bee46bc\") " pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.723361 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsn5p\" (UniqueName: \"kubernetes.io/projected/a4436d8b-9043-4dfd-8f29-e6dd7bee46bc-kube-api-access-vsn5p\") pod \"metallb-operator-webhook-server-656bd465d-j8r6j\" (UID: \"a4436d8b-9043-4dfd-8f29-e6dd7bee46bc\") " pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.723381 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4436d8b-9043-4dfd-8f29-e6dd7bee46bc-webhook-cert\") pod \"metallb-operator-webhook-server-656bd465d-j8r6j\" (UID: \"a4436d8b-9043-4dfd-8f29-e6dd7bee46bc\") " pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.729238 4898 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4436d8b-9043-4dfd-8f29-e6dd7bee46bc-webhook-cert\") pod \"metallb-operator-webhook-server-656bd465d-j8r6j\" (UID: \"a4436d8b-9043-4dfd-8f29-e6dd7bee46bc\") " pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.733688 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4436d8b-9043-4dfd-8f29-e6dd7bee46bc-apiservice-cert\") pod \"metallb-operator-webhook-server-656bd465d-j8r6j\" (UID: \"a4436d8b-9043-4dfd-8f29-e6dd7bee46bc\") " pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.752336 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsn5p\" (UniqueName: \"kubernetes.io/projected/a4436d8b-9043-4dfd-8f29-e6dd7bee46bc-kube-api-access-vsn5p\") pod \"metallb-operator-webhook-server-656bd465d-j8r6j\" (UID: \"a4436d8b-9043-4dfd-8f29-e6dd7bee46bc\") " pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.880531 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz"] Jan 20 04:01:27 crc kubenswrapper[4898]: W0120 04:01:27.891370 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca361fa9_3501_4e45_b43a_5344a65efac5.slice/crio-07002b70bd389b18a9a0d2510bd6bcce2a2e5b3a91171061718571675b6fc661 WatchSource:0}: Error finding container 07002b70bd389b18a9a0d2510bd6bcce2a2e5b3a91171061718571675b6fc661: Status 404 returned error can't find the container with id 07002b70bd389b18a9a0d2510bd6bcce2a2e5b3a91171061718571675b6fc661 Jan 20 04:01:27 crc kubenswrapper[4898]: I0120 04:01:27.908889 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:28 crc kubenswrapper[4898]: I0120 04:01:28.089887 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j"] Jan 20 04:01:28 crc kubenswrapper[4898]: I0120 04:01:28.763359 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" event={"ID":"ca361fa9-3501-4e45-b43a-5344a65efac5","Type":"ContainerStarted","Data":"07002b70bd389b18a9a0d2510bd6bcce2a2e5b3a91171061718571675b6fc661"} Jan 20 04:01:28 crc kubenswrapper[4898]: I0120 04:01:28.764832 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" event={"ID":"a4436d8b-9043-4dfd-8f29-e6dd7bee46bc","Type":"ContainerStarted","Data":"9b7a30f60c43176064ca754760a8e959cedfebdf196dd799dbc5e372da0f421e"} Jan 20 04:01:33 crc kubenswrapper[4898]: I0120 04:01:33.824229 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" event={"ID":"ca361fa9-3501-4e45-b43a-5344a65efac5","Type":"ContainerStarted","Data":"7c36e2dd7f1561ce453d84a5c29401acba205f8537a4671380b3f58ddb7a960e"} Jan 20 04:01:33 crc kubenswrapper[4898]: I0120 04:01:33.824701 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:01:33 crc kubenswrapper[4898]: I0120 04:01:33.825775 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" event={"ID":"a4436d8b-9043-4dfd-8f29-e6dd7bee46bc","Type":"ContainerStarted","Data":"46fc5ce76ae56128fdb10dba406bfd46f5e1c517f9ce548fc9816b6cb1ed0503"} Jan 20 04:01:33 crc kubenswrapper[4898]: I0120 04:01:33.826223 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:01:33 crc kubenswrapper[4898]: I0120 04:01:33.847446 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" podStartSLOduration=1.257382633 podStartE2EDuration="6.847415634s" podCreationTimestamp="2026-01-20 04:01:27 +0000 UTC" firstStartedPulling="2026-01-20 04:01:27.894654558 +0000 UTC m=+734.494442417" lastFinishedPulling="2026-01-20 04:01:33.484687559 +0000 UTC m=+740.084475418" observedRunningTime="2026-01-20 04:01:33.843741535 +0000 UTC m=+740.443529404" watchObservedRunningTime="2026-01-20 04:01:33.847415634 +0000 UTC m=+740.447203503" Jan 20 04:01:33 crc kubenswrapper[4898]: I0120 04:01:33.873774 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" podStartSLOduration=1.464813904 podStartE2EDuration="6.873756407s" podCreationTimestamp="2026-01-20 04:01:27 +0000 UTC" firstStartedPulling="2026-01-20 04:01:28.092451319 +0000 UTC m=+734.692239178" lastFinishedPulling="2026-01-20 04:01:33.501393822 +0000 UTC m=+740.101181681" observedRunningTime="2026-01-20 04:01:33.869822491 +0000 UTC m=+740.469610350" watchObservedRunningTime="2026-01-20 04:01:33.873756407 +0000 UTC m=+740.473544266" Jan 20 04:01:47 crc kubenswrapper[4898]: I0120 04:01:47.912694 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-webhook-server-656bd465d-j8r6j" Jan 20 04:02:04 crc kubenswrapper[4898]: I0120 04:02:04.399243 4898 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 04:02:07 crc kubenswrapper[4898]: I0120 04:02:07.597412 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7bfbbff6f9-rppqz" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.494390 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-f9q7w"] Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.498508 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks"] Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.498888 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.500548 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.504677 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-78qcm" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.505248 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.506533 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.507854 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.537417 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks"] Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.611292 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-27wmr"] Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.612579 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.619200 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-747zx" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.619404 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.619456 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.619535 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.626372 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-5zgx8"] Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.632699 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.637895 4898 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.643191 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-reloader\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.643251 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-frr-conf\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.643297 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-metrics\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.643331 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rltsk\" (UniqueName: \"kubernetes.io/projected/033a1a1e-99a3-4195-bd53-bf46c4e768b7-kube-api-access-rltsk\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.643375 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-frr-sockets\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.643410 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/033a1a1e-99a3-4195-bd53-bf46c4e768b7-frr-startup\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.643458 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/033a1a1e-99a3-4195-bd53-bf46c4e768b7-metrics-certs\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.643490 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7da4c82-f1c1-494d-8f99-ea71c542169e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-g87ks\" (UID: \"f7da4c82-f1c1-494d-8f99-ea71c542169e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.643536 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5z8k\" (UniqueName: 
\"kubernetes.io/projected/f7da4c82-f1c1-494d-8f99-ea71c542169e-kube-api-access-p5z8k\") pod \"frr-k8s-webhook-server-7df86c4f6c-g87ks\" (UID: \"f7da4c82-f1c1-494d-8f99-ea71c542169e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.653123 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-5zgx8"] Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745040 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/033a1a1e-99a3-4195-bd53-bf46c4e768b7-metrics-certs\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745140 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7da4c82-f1c1-494d-8f99-ea71c542169e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-g87ks\" (UID: \"f7da4c82-f1c1-494d-8f99-ea71c542169e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745203 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-memberlist\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745268 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5z8k\" (UniqueName: \"kubernetes.io/projected/f7da4c82-f1c1-494d-8f99-ea71c542169e-kube-api-access-p5z8k\") pod \"frr-k8s-webhook-server-7df86c4f6c-g87ks\" (UID: \"f7da4c82-f1c1-494d-8f99-ea71c542169e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" Jan 20 04:02:08 crc kubenswrapper[4898]: E0120 04:02:08.745310 4898 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745333 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-reloader\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745370 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-frr-conf\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: E0120 04:02:08.745383 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7da4c82-f1c1-494d-8f99-ea71c542169e-cert podName:f7da4c82-f1c1-494d-8f99-ea71c542169e nodeName:}" failed. No retries permitted until 2026-01-20 04:02:09.245364017 +0000 UTC m=+775.845151876 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7da4c82-f1c1-494d-8f99-ea71c542169e-cert") pod "frr-k8s-webhook-server-7df86c4f6c-g87ks" (UID: "f7da4c82-f1c1-494d-8f99-ea71c542169e") : secret "frr-k8s-webhook-server-cert" not found Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745424 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-metrics\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745484 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzrxm\" (UniqueName: \"kubernetes.io/projected/beba60fc-d482-43a1-885b-b03a082a4e95-kube-api-access-xzrxm\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745532 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rltsk\" (UniqueName: \"kubernetes.io/projected/033a1a1e-99a3-4195-bd53-bf46c4e768b7-kube-api-access-rltsk\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745573 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5qx9\" (UniqueName: \"kubernetes.io/projected/1b647c19-565c-4041-980f-2455d029079c-kube-api-access-v5qx9\") pod \"controller-6968d8fdc4-5zgx8\" (UID: \"1b647c19-565c-4041-980f-2455d029079c\") " pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745630 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-frr-sockets\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745665 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/beba60fc-d482-43a1-885b-b03a082a4e95-metallb-excludel2\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745715 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b647c19-565c-4041-980f-2455d029079c-cert\") pod \"controller-6968d8fdc4-5zgx8\" (UID: \"1b647c19-565c-4041-980f-2455d029079c\") " pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745765 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-metrics-certs\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745829 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/033a1a1e-99a3-4195-bd53-bf46c4e768b7-frr-startup\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.745879 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b647c19-565c-4041-980f-2455d029079c-metrics-certs\") pod \"controller-6968d8fdc4-5zgx8\" (UID: \"1b647c19-565c-4041-980f-2455d029079c\") " pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.746095 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-metrics\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.746531 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-reloader\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.746528 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-frr-sockets\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.746725 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/033a1a1e-99a3-4195-bd53-bf46c4e768b7-frr-conf\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.748405 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/033a1a1e-99a3-4195-bd53-bf46c4e768b7-frr-startup\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.754936 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/033a1a1e-99a3-4195-bd53-bf46c4e768b7-metrics-certs\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.764078 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5z8k\" (UniqueName: \"kubernetes.io/projected/f7da4c82-f1c1-494d-8f99-ea71c542169e-kube-api-access-p5z8k\") pod \"frr-k8s-webhook-server-7df86c4f6c-g87ks\" (UID: \"f7da4c82-f1c1-494d-8f99-ea71c542169e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.767131 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rltsk\" (UniqueName: \"kubernetes.io/projected/033a1a1e-99a3-4195-bd53-bf46c4e768b7-kube-api-access-rltsk\") pod \"frr-k8s-f9q7w\" (UID: \"033a1a1e-99a3-4195-bd53-bf46c4e768b7\") " pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.833480 4898 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.847738 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b647c19-565c-4041-980f-2455d029079c-metrics-certs\") pod \"controller-6968d8fdc4-5zgx8\" (UID: \"1b647c19-565c-4041-980f-2455d029079c\") " pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.847914 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-memberlist\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: E0120 04:02:08.848081 4898 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.848140 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzrxm\" (UniqueName: \"kubernetes.io/projected/beba60fc-d482-43a1-885b-b03a082a4e95-kube-api-access-xzrxm\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: E0120 04:02:08.848154 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-memberlist podName:beba60fc-d482-43a1-885b-b03a082a4e95 nodeName:}" failed. No retries permitted until 2026-01-20 04:02:09.348134148 +0000 UTC m=+775.947921997 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-memberlist") pod "speaker-27wmr" (UID: "beba60fc-d482-43a1-885b-b03a082a4e95") : secret "metallb-memberlist" not found Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.848201 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5qx9\" (UniqueName: \"kubernetes.io/projected/1b647c19-565c-4041-980f-2455d029079c-kube-api-access-v5qx9\") pod \"controller-6968d8fdc4-5zgx8\" (UID: \"1b647c19-565c-4041-980f-2455d029079c\") " pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.848286 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/beba60fc-d482-43a1-885b-b03a082a4e95-metallb-excludel2\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.848345 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b647c19-565c-4041-980f-2455d029079c-cert\") pod \"controller-6968d8fdc4-5zgx8\" (UID: \"1b647c19-565c-4041-980f-2455d029079c\") " pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.848392 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-metrics-certs\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: 
I0120 04:02:08.850429 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/beba60fc-d482-43a1-885b-b03a082a4e95-metallb-excludel2\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.853335 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b647c19-565c-4041-980f-2455d029079c-cert\") pod \"controller-6968d8fdc4-5zgx8\" (UID: \"1b647c19-565c-4041-980f-2455d029079c\") " pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.853763 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-metrics-certs\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.857611 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b647c19-565c-4041-980f-2455d029079c-metrics-certs\") pod \"controller-6968d8fdc4-5zgx8\" (UID: \"1b647c19-565c-4041-980f-2455d029079c\") " pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.868914 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5qx9\" (UniqueName: \"kubernetes.io/projected/1b647c19-565c-4041-980f-2455d029079c-kube-api-access-v5qx9\") pod \"controller-6968d8fdc4-5zgx8\" (UID: \"1b647c19-565c-4041-980f-2455d029079c\") " pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.874061 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzrxm\" (UniqueName: \"kubernetes.io/projected/beba60fc-d482-43a1-885b-b03a082a4e95-kube-api-access-xzrxm\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:08 crc kubenswrapper[4898]: I0120 04:02:08.950885 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:09 crc kubenswrapper[4898]: I0120 04:02:09.255259 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7da4c82-f1c1-494d-8f99-ea71c542169e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-g87ks\" (UID: \"f7da4c82-f1c1-494d-8f99-ea71c542169e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" Jan 20 04:02:09 crc kubenswrapper[4898]: I0120 04:02:09.260944 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7da4c82-f1c1-494d-8f99-ea71c542169e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-g87ks\" (UID: \"f7da4c82-f1c1-494d-8f99-ea71c542169e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" Jan 20 04:02:09 crc kubenswrapper[4898]: I0120 04:02:09.267988 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-5zgx8"] Jan 20 04:02:09 crc kubenswrapper[4898]: W0120 04:02:09.274233 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b647c19_565c_4041_980f_2455d029079c.slice/crio-500c4ed3a952f6c86c2f81e137798af5f47816efdfe7cb123eef9ef0fdbc89d9 WatchSource:0}: Error finding container 500c4ed3a952f6c86c2f81e137798af5f47816efdfe7cb123eef9ef0fdbc89d9: Status 404 returned error can't find the container with id 500c4ed3a952f6c86c2f81e137798af5f47816efdfe7cb123eef9ef0fdbc89d9 Jan 20 04:02:09 crc kubenswrapper[4898]: I0120 04:02:09.357367 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-memberlist\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:09 crc kubenswrapper[4898]: E0120 04:02:09.357641 4898 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 20 04:02:09 crc kubenswrapper[4898]: E0120 04:02:09.357789 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-memberlist podName:beba60fc-d482-43a1-885b-b03a082a4e95 nodeName:}" failed. No retries permitted until 2026-01-20 04:02:10.357746913 +0000 UTC m=+776.957534772 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-memberlist") pod "speaker-27wmr" (UID: "beba60fc-d482-43a1-885b-b03a082a4e95") : secret "metallb-memberlist" not found Jan 20 04:02:09 crc kubenswrapper[4898]: I0120 04:02:09.446653 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" Jan 20 04:02:09 crc kubenswrapper[4898]: I0120 04:02:09.732586 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks"] Jan 20 04:02:09 crc kubenswrapper[4898]: I0120 04:02:09.976568 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:02:09 crc kubenswrapper[4898]: I0120 04:02:09.976632 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:02:10 crc kubenswrapper[4898]: I0120 04:02:10.054016 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5zgx8" event={"ID":"1b647c19-565c-4041-980f-2455d029079c","Type":"ContainerStarted","Data":"954d2edf10ff82a9e8556b66033f9726e5bc68207569d226ec30175d8d5817d2"} Jan 20 04:02:10 crc kubenswrapper[4898]: I0120 04:02:10.054085 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5zgx8" event={"ID":"1b647c19-565c-4041-980f-2455d029079c","Type":"ContainerStarted","Data":"d4f009ad79f8cac08e301ec7483b306a2f36be001ac55ef4c13e159d59fe50b3"} Jan 20 04:02:10 crc kubenswrapper[4898]: I0120 04:02:10.054096 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5zgx8" event={"ID":"1b647c19-565c-4041-980f-2455d029079c","Type":"ContainerStarted","Data":"500c4ed3a952f6c86c2f81e137798af5f47816efdfe7cb123eef9ef0fdbc89d9"} Jan 20 04:02:10 crc kubenswrapper[4898]: I0120 04:02:10.054142 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:10 crc kubenswrapper[4898]: I0120 04:02:10.055956 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f9q7w" event={"ID":"033a1a1e-99a3-4195-bd53-bf46c4e768b7","Type":"ContainerStarted","Data":"6714ef76f168138bbc823adb3cc4d93da8be1b8980e47f14e63a67add4e40416"} Jan 20 04:02:10 crc kubenswrapper[4898]: I0120 04:02:10.057483 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" event={"ID":"f7da4c82-f1c1-494d-8f99-ea71c542169e","Type":"ContainerStarted","Data":"a7283a996b6761c8d02c4014647293a51e198d441204c00086e156ba58c1fdc7"} Jan 20 04:02:10 crc kubenswrapper[4898]: I0120 04:02:10.075539 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-5zgx8" podStartSLOduration=2.075516952 podStartE2EDuration="2.075516952s" podCreationTimestamp="2026-01-20 04:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:02:10.073797317 +0000 UTC m=+776.673585196" watchObservedRunningTime="2026-01-20 04:02:10.075516952 +0000 UTC m=+776.675304811" Jan 20 04:02:10 crc kubenswrapper[4898]: I0120 04:02:10.404535 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-memberlist\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:10 crc kubenswrapper[4898]: I0120 04:02:10.415207 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/beba60fc-d482-43a1-885b-b03a082a4e95-memberlist\") pod \"speaker-27wmr\" (UID: \"beba60fc-d482-43a1-885b-b03a082a4e95\") " pod="metallb-system/speaker-27wmr" Jan 20 04:02:10 crc kubenswrapper[4898]: I0120 04:02:10.433748 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-27wmr" Jan 20 04:02:11 crc kubenswrapper[4898]: I0120 04:02:11.067137 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-27wmr" event={"ID":"beba60fc-d482-43a1-885b-b03a082a4e95","Type":"ContainerStarted","Data":"9ffe921ed16ad513c4fdeedc5bc15fb0ecdf60b097027ee7bed35b0fefea3c19"} Jan 20 04:02:11 crc kubenswrapper[4898]: I0120 04:02:11.068208 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-27wmr" event={"ID":"beba60fc-d482-43a1-885b-b03a082a4e95","Type":"ContainerStarted","Data":"3444abc065b4c88ad36b3053f804b79f4d80d4f508e8542f2fc736ae27e4a008"} Jan 20 04:02:12 crc kubenswrapper[4898]: I0120 04:02:12.075715 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-27wmr" event={"ID":"beba60fc-d482-43a1-885b-b03a082a4e95","Type":"ContainerStarted","Data":"0f0a048f5c09e3dbf832ee37f604e2a9d6932291b1b5be7f24b9638e2c239e3b"} Jan 20 04:02:12 crc kubenswrapper[4898]: I0120 04:02:12.076074 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-27wmr" Jan 20 04:02:12 crc kubenswrapper[4898]: I0120 04:02:12.097489 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-27wmr" podStartSLOduration=4.097470306 podStartE2EDuration="4.097470306s" podCreationTimestamp="2026-01-20 04:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:02:12.096294089 +0000 UTC m=+778.696081948" watchObservedRunningTime="2026-01-20 04:02:12.097470306 +0000 UTC m=+778.697258165" Jan 20 04:02:16 crc kubenswrapper[4898]: I0120 04:02:16.101645 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" event={"ID":"f7da4c82-f1c1-494d-8f99-ea71c542169e","Type":"ContainerStarted","Data":"34d078ae6e9fdb2a38cafa8c2bc34bdc1b9b84e4b6e2446cc10d63d332a05b60"} Jan 20 04:02:16 crc kubenswrapper[4898]: I0120 04:02:16.102187 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" Jan 20 04:02:16 crc kubenswrapper[4898]: I0120 04:02:16.103417 4898 generic.go:334] "Generic (PLEG): container finished" podID="033a1a1e-99a3-4195-bd53-bf46c4e768b7" containerID="52084df7ac5ae2f8a33a72358bf1a77fe9399d4838834a32f3777698c577991b" exitCode=0 Jan 20 04:02:16 crc kubenswrapper[4898]: I0120 04:02:16.103595 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f9q7w" event={"ID":"033a1a1e-99a3-4195-bd53-bf46c4e768b7","Type":"ContainerDied","Data":"52084df7ac5ae2f8a33a72358bf1a77fe9399d4838834a32f3777698c577991b"} Jan 20 04:02:16 crc kubenswrapper[4898]: I0120 04:02:16.119573 4898 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" podStartSLOduration=2.021394664 podStartE2EDuration="8.119552193s" podCreationTimestamp="2026-01-20 04:02:08 +0000 UTC" firstStartedPulling="2026-01-20 04:02:09.73598218 +0000 UTC m=+776.335770049" lastFinishedPulling="2026-01-20 04:02:15.834139719 +0000 UTC m=+782.433927578" observedRunningTime="2026-01-20 04:02:16.114141821 +0000 UTC m=+782.713929690" watchObservedRunningTime="2026-01-20 04:02:16.119552193 +0000 UTC m=+782.719340052" Jan 20 04:02:17 crc kubenswrapper[4898]: I0120 04:02:17.110825 4898 generic.go:334] "Generic (PLEG): container finished" podID="033a1a1e-99a3-4195-bd53-bf46c4e768b7" containerID="d843a4b062fc36e2c643bb2f6b954c3d6410a979b0a67ddbb5994c02d3cc8ec5" exitCode=0 Jan 20 04:02:17 crc kubenswrapper[4898]: I0120 04:02:17.110872 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f9q7w" event={"ID":"033a1a1e-99a3-4195-bd53-bf46c4e768b7","Type":"ContainerDied","Data":"d843a4b062fc36e2c643bb2f6b954c3d6410a979b0a67ddbb5994c02d3cc8ec5"} Jan 20 04:02:18 crc kubenswrapper[4898]: I0120 04:02:18.119997 4898 generic.go:334] "Generic (PLEG): container finished" podID="033a1a1e-99a3-4195-bd53-bf46c4e768b7" containerID="a870954301a6fa04e8c0a16770aba3e4c5197619fbc38c7ea58d494f47b11eb1" exitCode=0 Jan 20 04:02:18 crc kubenswrapper[4898]: I0120 04:02:18.120353 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f9q7w" event={"ID":"033a1a1e-99a3-4195-bd53-bf46c4e768b7","Type":"ContainerDied","Data":"a870954301a6fa04e8c0a16770aba3e4c5197619fbc38c7ea58d494f47b11eb1"} Jan 20 04:02:19 crc kubenswrapper[4898]: I0120 04:02:19.186243 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f9q7w" event={"ID":"033a1a1e-99a3-4195-bd53-bf46c4e768b7","Type":"ContainerStarted","Data":"c57b130ecdd0759275e8e31684e25d47543592743822bb5817cffb2733583d56"} Jan 20 04:02:19 crc kubenswrapper[4898]: I0120 04:02:19.187716 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f9q7w" event={"ID":"033a1a1e-99a3-4195-bd53-bf46c4e768b7","Type":"ContainerStarted","Data":"379bf3d8354af2e701d42d60d555eab6f6725ab01a8cd9897c4760ad18412820"} Jan 20 04:02:19 crc kubenswrapper[4898]: I0120 04:02:19.187734 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f9q7w" event={"ID":"033a1a1e-99a3-4195-bd53-bf46c4e768b7","Type":"ContainerStarted","Data":"13370a0772aacce102e3358c5cfcca764a8ab9469f6dbdd7cb21e282765143f0"} Jan 20 04:02:19 crc kubenswrapper[4898]: I0120 04:02:19.187743 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f9q7w" event={"ID":"033a1a1e-99a3-4195-bd53-bf46c4e768b7","Type":"ContainerStarted","Data":"98ddf1f5946bb4551b132a91a49132c64283c4332bcee985af67668c92cb89a8"} Jan 20 04:02:19 crc kubenswrapper[4898]: I0120 04:02:19.187759 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f9q7w" event={"ID":"033a1a1e-99a3-4195-bd53-bf46c4e768b7","Type":"ContainerStarted","Data":"53b169c38cead917fb61e15642c951304140c7731526fc8fb1ca12d1752073ac"} Jan 20 04:02:20 crc kubenswrapper[4898]: I0120 04:02:20.201480 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f9q7w" event={"ID":"033a1a1e-99a3-4195-bd53-bf46c4e768b7","Type":"ContainerStarted","Data":"adf717519aa4d955f150257d4959220c26a24368eedc705839daa098f5447edd"} Jan 20 04:02:20 crc kubenswrapper[4898]: I0120 04:02:20.201906 4898 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:20 crc kubenswrapper[4898]: I0120 04:02:20.230029 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-f9q7w" podStartSLOduration=5.470248099 podStartE2EDuration="12.23000417s" podCreationTimestamp="2026-01-20 04:02:08 +0000 UTC" firstStartedPulling="2026-01-20 04:02:09.058447943 +0000 UTC m=+775.658235812" lastFinishedPulling="2026-01-20 04:02:15.818204024 +0000 UTC m=+782.417991883" observedRunningTime="2026-01-20 04:02:20.22495648 +0000 UTC m=+786.824744359" watchObservedRunningTime="2026-01-20 04:02:20.23000417 +0000 UTC m=+786.829792069" Jan 20 04:02:20 crc kubenswrapper[4898]: I0120 04:02:20.438185 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-27wmr" Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.630234 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jhsw7"] Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.632608 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jhsw7" Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.635372 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cgt49" Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.636716 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.636728 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.699270 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jhsw7"] Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.716201 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjs8z\" (UniqueName: \"kubernetes.io/projected/fad68645-8121-4365-8369-8f0afba0948b-kube-api-access-wjs8z\") pod \"openstack-operator-index-jhsw7\" (UID: \"fad68645-8121-4365-8369-8f0afba0948b\") " pod="openstack-operators/openstack-operator-index-jhsw7" Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.817686 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjs8z\" (UniqueName: \"kubernetes.io/projected/fad68645-8121-4365-8369-8f0afba0948b-kube-api-access-wjs8z\") pod \"openstack-operator-index-jhsw7\" (UID: \"fad68645-8121-4365-8369-8f0afba0948b\") " pod="openstack-operators/openstack-operator-index-jhsw7" Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.834264 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.841536 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjs8z\" (UniqueName: \"kubernetes.io/projected/fad68645-8121-4365-8369-8f0afba0948b-kube-api-access-wjs8z\") pod \"openstack-operator-index-jhsw7\" (UID: \"fad68645-8121-4365-8369-8f0afba0948b\") " pod="openstack-operators/openstack-operator-index-jhsw7" Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.876761 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:23 crc kubenswrapper[4898]: I0120 04:02:23.957235 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jhsw7" Jan 20 04:02:24 crc kubenswrapper[4898]: I0120 04:02:24.484860 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jhsw7"] Jan 20 04:02:24 crc kubenswrapper[4898]: W0120 04:02:24.490607 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfad68645_8121_4365_8369_8f0afba0948b.slice/crio-1e40de012b3090b35c49a44f60604a85c10119fbf3b8fb045b2b1c90a6d6f5f2 WatchSource:0}: Error finding container 1e40de012b3090b35c49a44f60604a85c10119fbf3b8fb045b2b1c90a6d6f5f2: Status 404 returned error can't find the container with id 1e40de012b3090b35c49a44f60604a85c10119fbf3b8fb045b2b1c90a6d6f5f2 Jan 20 04:02:25 crc kubenswrapper[4898]: I0120 04:02:25.241611 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jhsw7" event={"ID":"fad68645-8121-4365-8369-8f0afba0948b","Type":"ContainerStarted","Data":"1e40de012b3090b35c49a44f60604a85c10119fbf3b8fb045b2b1c90a6d6f5f2"} Jan 20 04:02:26 crc kubenswrapper[4898]: I0120 04:02:26.973351 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jhsw7"] Jan 20 04:02:27 crc kubenswrapper[4898]: I0120 04:02:27.589616 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8fbkv"] Jan 20 04:02:27 crc kubenswrapper[4898]: I0120 04:02:27.591729 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8fbkv" Jan 20 04:02:27 crc kubenswrapper[4898]: I0120 04:02:27.601648 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8fbkv"] Jan 20 04:02:27 crc kubenswrapper[4898]: I0120 04:02:27.683213 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cmbv\" (UniqueName: \"kubernetes.io/projected/d7a9d525-aa60-4aa7-b12b-28c27c8fa591-kube-api-access-7cmbv\") pod \"openstack-operator-index-8fbkv\" (UID: \"d7a9d525-aa60-4aa7-b12b-28c27c8fa591\") " pod="openstack-operators/openstack-operator-index-8fbkv" Jan 20 04:02:27 crc kubenswrapper[4898]: I0120 04:02:27.785054 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cmbv\" (UniqueName: \"kubernetes.io/projected/d7a9d525-aa60-4aa7-b12b-28c27c8fa591-kube-api-access-7cmbv\") pod \"openstack-operator-index-8fbkv\" (UID: \"d7a9d525-aa60-4aa7-b12b-28c27c8fa591\") " pod="openstack-operators/openstack-operator-index-8fbkv" Jan 20 04:02:27 crc kubenswrapper[4898]: I0120 04:02:27.812858 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cmbv\" (UniqueName: \"kubernetes.io/projected/d7a9d525-aa60-4aa7-b12b-28c27c8fa591-kube-api-access-7cmbv\") pod \"openstack-operator-index-8fbkv\" (UID: \"d7a9d525-aa60-4aa7-b12b-28c27c8fa591\") " pod="openstack-operators/openstack-operator-index-8fbkv" Jan 20 04:02:27 crc kubenswrapper[4898]: I0120 04:02:27.949405 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8fbkv" Jan 20 04:02:28 crc kubenswrapper[4898]: I0120 04:02:28.260211 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jhsw7" event={"ID":"fad68645-8121-4365-8369-8f0afba0948b","Type":"ContainerStarted","Data":"925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655"} Jan 20 04:02:28 crc kubenswrapper[4898]: I0120 04:02:28.260834 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-jhsw7" podUID="fad68645-8121-4365-8369-8f0afba0948b" containerName="registry-server" containerID="cri-o://925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655" gracePeriod=2 Jan 20 04:02:28 crc kubenswrapper[4898]: I0120 04:02:28.287906 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jhsw7" podStartSLOduration=2.610253357 podStartE2EDuration="5.287880474s" podCreationTimestamp="2026-01-20 04:02:23 +0000 UTC" firstStartedPulling="2026-01-20 04:02:24.493715839 +0000 UTC m=+791.093503698" lastFinishedPulling="2026-01-20 04:02:27.171342956 +0000 UTC m=+793.771130815" observedRunningTime="2026-01-20 04:02:28.279900411 +0000 UTC m=+794.879688280" watchObservedRunningTime="2026-01-20 04:02:28.287880474 +0000 UTC m=+794.887668343" Jan 20 04:02:28 crc kubenswrapper[4898]: I0120 04:02:28.416836 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8fbkv"] Jan 20 04:02:28 crc kubenswrapper[4898]: I0120 04:02:28.674299 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jhsw7" Jan 20 04:02:28 crc kubenswrapper[4898]: I0120 04:02:28.798596 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjs8z\" (UniqueName: \"kubernetes.io/projected/fad68645-8121-4365-8369-8f0afba0948b-kube-api-access-wjs8z\") pod \"fad68645-8121-4365-8369-8f0afba0948b\" (UID: \"fad68645-8121-4365-8369-8f0afba0948b\") " Jan 20 04:02:28 crc kubenswrapper[4898]: I0120 04:02:28.806515 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad68645-8121-4365-8369-8f0afba0948b-kube-api-access-wjs8z" (OuterVolumeSpecName: "kube-api-access-wjs8z") pod "fad68645-8121-4365-8369-8f0afba0948b" (UID: "fad68645-8121-4365-8369-8f0afba0948b"). InnerVolumeSpecName "kube-api-access-wjs8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:02:28 crc kubenswrapper[4898]: I0120 04:02:28.837636 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-f9q7w" Jan 20 04:02:28 crc kubenswrapper[4898]: I0120 04:02:28.901277 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjs8z\" (UniqueName: \"kubernetes.io/projected/fad68645-8121-4365-8369-8f0afba0948b-kube-api-access-wjs8z\") on node \"crc\" DevicePath \"\"" Jan 20 04:02:28 crc kubenswrapper[4898]: I0120 04:02:28.956682 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-5zgx8" Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.271766 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8fbkv" event={"ID":"d7a9d525-aa60-4aa7-b12b-28c27c8fa591","Type":"ContainerStarted","Data":"8bc2de1c25fbd85b14e8ce142b6e50fd15e314fc3472310d8e75edaeb60c883a"} Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.271845 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8fbkv" event={"ID":"d7a9d525-aa60-4aa7-b12b-28c27c8fa591","Type":"ContainerStarted","Data":"8272678e8c670d08a2af9bf564e182cbbea27e03e59a4646df1e0549bc1f9ecf"} Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.274853 4898 generic.go:334] "Generic (PLEG): container finished" podID="fad68645-8121-4365-8369-8f0afba0948b" containerID="925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655" exitCode=0 Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.274903 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jhsw7" Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.274919 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jhsw7" event={"ID":"fad68645-8121-4365-8369-8f0afba0948b","Type":"ContainerDied","Data":"925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655"} Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.274971 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jhsw7" event={"ID":"fad68645-8121-4365-8369-8f0afba0948b","Type":"ContainerDied","Data":"1e40de012b3090b35c49a44f60604a85c10119fbf3b8fb045b2b1c90a6d6f5f2"} Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.275001 4898 scope.go:117] "RemoveContainer" containerID="925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655" Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.308806 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8fbkv" podStartSLOduration=2.260290479 podStartE2EDuration="2.308757617s" podCreationTimestamp="2026-01-20 04:02:27 +0000 UTC" firstStartedPulling="2026-01-20 04:02:28.435558378 +0000 UTC m=+795.035346247" lastFinishedPulling="2026-01-20 04:02:28.484025516 +0000 UTC m=+795.083813385" observedRunningTime="2026-01-20 04:02:29.300931539 +0000 UTC m=+795.900719458" watchObservedRunningTime="2026-01-20 04:02:29.308757617 +0000 UTC m=+795.908545646" Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.317474 4898 scope.go:117] "RemoveContainer" containerID="925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655" Jan 20 04:02:29 crc kubenswrapper[4898]: E0120 04:02:29.318774 4898 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655\": container with ID starting with 925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655 not found: ID does not exist" containerID="925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655" Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.318831 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655"} err="failed to get container status \"925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655\": rpc error: code = NotFound desc = could not find container \"925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655\": container with ID starting with 925ae69622dc0ed5e1f2050aaa9f5b143258657a507f941a05e4597795ebf655 not found: ID does not exist" Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.340934 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jhsw7"] Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.349543 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-jhsw7"] Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.453654 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g87ks" Jan 20 04:02:29 crc kubenswrapper[4898]: I0120 04:02:29.728892 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad68645-8121-4365-8369-8f0afba0948b" path="/var/lib/kubelet/pods/fad68645-8121-4365-8369-8f0afba0948b/volumes" Jan 20 04:02:37 crc kubenswrapper[4898]: I0120 04:02:37.949526 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8fbkv" Jan 20 04:02:37 crc kubenswrapper[4898]: I0120 04:02:37.950266 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8fbkv" Jan 20 04:02:37 crc kubenswrapper[4898]: I0120 04:02:37.994859 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8fbkv" Jan 20 04:02:38 crc kubenswrapper[4898]: I0120 04:02:38.411190 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8fbkv" Jan 20 04:02:39 crc kubenswrapper[4898]: I0120 04:02:39.975820 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:02:39 crc kubenswrapper[4898]: I0120 04:02:39.976167 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.065916 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp"] Jan 20 04:02:45 crc kubenswrapper[4898]: E0120 04:02:45.066955 4898 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad68645-8121-4365-8369-8f0afba0948b" containerName="registry-server" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.066977 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad68645-8121-4365-8369-8f0afba0948b" containerName="registry-server" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.067236 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad68645-8121-4365-8369-8f0afba0948b" containerName="registry-server" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.069723 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.072926 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-lmdsb" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.075699 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp"] Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.158971 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npfjr\" (UniqueName: \"kubernetes.io/projected/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-kube-api-access-npfjr\") pod \"92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.159039 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-bundle\") pod \"92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.159067 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-util\") pod \"92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.259873 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-util\") pod \"92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.260194 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npfjr\" (UniqueName: \"kubernetes.io/projected/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-kube-api-access-npfjr\") pod \"92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:45 crc 
kubenswrapper[4898]: I0120 04:02:45.260298 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-bundle\") pod \"92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.260689 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-util\") pod \"92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.260831 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-bundle\") pod \"92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.282309 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npfjr\" (UniqueName: \"kubernetes.io/projected/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-kube-api-access-npfjr\") pod \"92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.434155 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:45 crc kubenswrapper[4898]: I0120 04:02:45.847520 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp"] Jan 20 04:02:45 crc kubenswrapper[4898]: W0120 04:02:45.859678 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3850c7f3_e99b_4c2b_b0cd_bfc05057051a.slice/crio-09c8e8775132ab769e3e75da166210ef512d7fb152ec9b6d1362a88969fb3e0a WatchSource:0}: Error finding container 09c8e8775132ab769e3e75da166210ef512d7fb152ec9b6d1362a88969fb3e0a: Status 404 returned error can't find the container with id 09c8e8775132ab769e3e75da166210ef512d7fb152ec9b6d1362a88969fb3e0a Jan 20 04:02:46 crc kubenswrapper[4898]: I0120 04:02:46.432018 4898 generic.go:334] "Generic (PLEG): container finished" podID="3850c7f3-e99b-4c2b-b0cd-bfc05057051a" containerID="bc646486eedc402ce15d65702469248521df763485a1de485fb4589198b53a11" exitCode=0 Jan 20 04:02:46 crc kubenswrapper[4898]: I0120 04:02:46.432118 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" event={"ID":"3850c7f3-e99b-4c2b-b0cd-bfc05057051a","Type":"ContainerDied","Data":"bc646486eedc402ce15d65702469248521df763485a1de485fb4589198b53a11"} Jan 20 04:02:46 crc kubenswrapper[4898]: I0120 04:02:46.432517 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" event={"ID":"3850c7f3-e99b-4c2b-b0cd-bfc05057051a","Type":"ContainerStarted","Data":"09c8e8775132ab769e3e75da166210ef512d7fb152ec9b6d1362a88969fb3e0a"} Jan 20 04:02:47 crc kubenswrapper[4898]: I0120 04:02:47.444178 4898 generic.go:334] "Generic (PLEG): container finished" podID="3850c7f3-e99b-4c2b-b0cd-bfc05057051a" containerID="6d7d2929b7ddd772e33c17d4e03a1a1312406f9fd0ef54b7f0ca42ee92bb624a" exitCode=0 Jan 20 04:02:47 crc kubenswrapper[4898]: I0120 04:02:47.444315 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" event={"ID":"3850c7f3-e99b-4c2b-b0cd-bfc05057051a","Type":"ContainerDied","Data":"6d7d2929b7ddd772e33c17d4e03a1a1312406f9fd0ef54b7f0ca42ee92bb624a"} Jan 20 04:02:48 crc kubenswrapper[4898]: I0120 04:02:48.453723 4898 generic.go:334] "Generic (PLEG): container finished" podID="3850c7f3-e99b-4c2b-b0cd-bfc05057051a" containerID="e092a0acfbb6fd204bf79c8397e5b681852bf6c5b7cb1d63ee6fef5199d39b43" exitCode=0 Jan 20 04:02:48 crc kubenswrapper[4898]: I0120 04:02:48.453825 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" event={"ID":"3850c7f3-e99b-4c2b-b0cd-bfc05057051a","Type":"ContainerDied","Data":"e092a0acfbb6fd204bf79c8397e5b681852bf6c5b7cb1d63ee6fef5199d39b43"} Jan 20 04:02:49 crc kubenswrapper[4898]: I0120 04:02:49.785947 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:49 crc kubenswrapper[4898]: I0120 04:02:49.837244 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npfjr\" (UniqueName: \"kubernetes.io/projected/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-kube-api-access-npfjr\") pod \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " Jan 20 04:02:49 crc kubenswrapper[4898]: I0120 04:02:49.837297 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-util\") pod \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " Jan 20 04:02:49 crc kubenswrapper[4898]: I0120 04:02:49.838672 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-bundle\") pod \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\" (UID: \"3850c7f3-e99b-4c2b-b0cd-bfc05057051a\") " Jan 20 04:02:49 crc kubenswrapper[4898]: I0120 04:02:49.840335 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-bundle" (OuterVolumeSpecName: "bundle") pod "3850c7f3-e99b-4c2b-b0cd-bfc05057051a" (UID: "3850c7f3-e99b-4c2b-b0cd-bfc05057051a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:02:49 crc kubenswrapper[4898]: I0120 04:02:49.850649 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-kube-api-access-npfjr" (OuterVolumeSpecName: "kube-api-access-npfjr") pod "3850c7f3-e99b-4c2b-b0cd-bfc05057051a" (UID: "3850c7f3-e99b-4c2b-b0cd-bfc05057051a"). InnerVolumeSpecName "kube-api-access-npfjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:02:49 crc kubenswrapper[4898]: I0120 04:02:49.857826 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-util" (OuterVolumeSpecName: "util") pod "3850c7f3-e99b-4c2b-b0cd-bfc05057051a" (UID: "3850c7f3-e99b-4c2b-b0cd-bfc05057051a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:02:49 crc kubenswrapper[4898]: I0120 04:02:49.941175 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npfjr\" (UniqueName: \"kubernetes.io/projected/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-kube-api-access-npfjr\") on node \"crc\" DevicePath \"\"" Jan 20 04:02:49 crc kubenswrapper[4898]: I0120 04:02:49.941229 4898 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-util\") on node \"crc\" DevicePath \"\"" Jan 20 04:02:49 crc kubenswrapper[4898]: I0120 04:02:49.941251 4898 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3850c7f3-e99b-4c2b-b0cd-bfc05057051a-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:02:50 crc kubenswrapper[4898]: I0120 04:02:50.475294 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" event={"ID":"3850c7f3-e99b-4c2b-b0cd-bfc05057051a","Type":"ContainerDied","Data":"09c8e8775132ab769e3e75da166210ef512d7fb152ec9b6d1362a88969fb3e0a"} Jan 20 04:02:50 crc kubenswrapper[4898]: I0120 04:02:50.475769 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09c8e8775132ab769e3e75da166210ef512d7fb152ec9b6d1362a88969fb3e0a" Jan 20 04:02:50 crc kubenswrapper[4898]: I0120 04:02:50.475369 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp" Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.547505 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w"] Jan 20 04:02:52 crc kubenswrapper[4898]: E0120 04:02:52.548062 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3850c7f3-e99b-4c2b-b0cd-bfc05057051a" containerName="pull" Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.548079 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="3850c7f3-e99b-4c2b-b0cd-bfc05057051a" containerName="pull" Jan 20 04:02:52 crc kubenswrapper[4898]: E0120 04:02:52.548097 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3850c7f3-e99b-4c2b-b0cd-bfc05057051a" containerName="util" Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.548105 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="3850c7f3-e99b-4c2b-b0cd-bfc05057051a" containerName="util" Jan 20 04:02:52 crc kubenswrapper[4898]: E0120 04:02:52.548122 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3850c7f3-e99b-4c2b-b0cd-bfc05057051a" containerName="extract" Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.548130 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="3850c7f3-e99b-4c2b-b0cd-bfc05057051a" containerName="extract" Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.548267 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="3850c7f3-e99b-4c2b-b0cd-bfc05057051a" containerName="extract" Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.548786 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w" Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.553099 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-kzjjr" Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.576571 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w"] Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.680976 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bkxq\" (UniqueName: \"kubernetes.io/projected/037e77e5-fdab-45b4-9b61-a24279e2b615-kube-api-access-5bkxq\") pod \"openstack-operator-controller-init-84587498f9-g7d2w\" (UID: \"037e77e5-fdab-45b4-9b61-a24279e2b615\") " pod="openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w" Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.782346 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bkxq\" (UniqueName: \"kubernetes.io/projected/037e77e5-fdab-45b4-9b61-a24279e2b615-kube-api-access-5bkxq\") pod \"openstack-operator-controller-init-84587498f9-g7d2w\" (UID: \"037e77e5-fdab-45b4-9b61-a24279e2b615\") " pod="openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w" Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.801008 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bkxq\" (UniqueName: \"kubernetes.io/projected/037e77e5-fdab-45b4-9b61-a24279e2b615-kube-api-access-5bkxq\") pod \"openstack-operator-controller-init-84587498f9-g7d2w\" (UID: \"037e77e5-fdab-45b4-9b61-a24279e2b615\") " pod="openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w" Jan 20 04:02:52 crc kubenswrapper[4898]: I0120 04:02:52.869788 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w" Jan 20 04:02:53 crc kubenswrapper[4898]: I0120 04:02:53.176681 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w"] Jan 20 04:02:53 crc kubenswrapper[4898]: I0120 04:02:53.501693 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w" event={"ID":"037e77e5-fdab-45b4-9b61-a24279e2b615","Type":"ContainerStarted","Data":"be52a8013351af1d94262a6eaa4ace7daca9c282e7b9554c9336a5a519e284ad"} Jan 20 04:02:57 crc kubenswrapper[4898]: I0120 04:02:57.546998 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w" event={"ID":"037e77e5-fdab-45b4-9b61-a24279e2b615","Type":"ContainerStarted","Data":"ff34961ab48e95f97476f5926664558c3e6bf909988cdb99caac86d16861c2e1"} Jan 20 04:02:57 crc kubenswrapper[4898]: I0120 04:02:57.547770 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w" Jan 20 04:02:57 crc kubenswrapper[4898]: I0120 04:02:57.571301 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w" podStartSLOduration=1.591558602 podStartE2EDuration="5.571282832s" podCreationTimestamp="2026-01-20 04:02:52 +0000 UTC" firstStartedPulling="2026-01-20 04:02:53.189807468 +0000 UTC m=+819.789595327" lastFinishedPulling="2026-01-20 04:02:57.169531698 +0000 UTC m=+823.769319557" observedRunningTime="2026-01-20 04:02:57.569893258 +0000 UTC m=+824.169681137" watchObservedRunningTime="2026-01-20 04:02:57.571282832 +0000 UTC m=+824.171070701" Jan 20 04:03:02 crc kubenswrapper[4898]: I0120 04:03:02.873992 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-84587498f9-g7d2w" Jan 20 04:03:09 crc kubenswrapper[4898]: I0120 04:03:09.976070 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:03:09 crc kubenswrapper[4898]: I0120 04:03:09.976660 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:03:09 crc kubenswrapper[4898]: I0120 04:03:09.976746 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 04:03:09 crc kubenswrapper[4898]: I0120 04:03:09.977456 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ba5e3101afcfdc26fb12699a1870157681fb7b78baca1e12fdaf156b52381e1"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 04:03:09 crc kubenswrapper[4898]: I0120 04:03:09.977523 4898 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://4ba5e3101afcfdc26fb12699a1870157681fb7b78baca1e12fdaf156b52381e1" gracePeriod=600 Jan 20 04:03:10 crc kubenswrapper[4898]: I0120 04:03:10.640031 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="4ba5e3101afcfdc26fb12699a1870157681fb7b78baca1e12fdaf156b52381e1" exitCode=0 Jan 20 04:03:10 crc kubenswrapper[4898]: I0120 04:03:10.640081 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"4ba5e3101afcfdc26fb12699a1870157681fb7b78baca1e12fdaf156b52381e1"} Jan 20 04:03:10 crc kubenswrapper[4898]: I0120 04:03:10.640483 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"e32845009c9a4455856ea9d28c879c6a89bdd3e67ca3cb7f9d9c98a468886eaa"} Jan 20 04:03:10 crc kubenswrapper[4898]: I0120 04:03:10.640511 4898 scope.go:117] "RemoveContainer" containerID="44d3f9b5ec84966828017ff7ecf7fddfe0954e43f58c20d5a24c6fd4e2708924" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.056251 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.057616 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.060057 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.060809 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.061196 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-pb4nt" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.069261 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.070064 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.071191 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-5zcht" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.072040 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-hbk44" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.076379 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.079873 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.083087 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.083809 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.086799 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-qnq2d" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.094679 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.095452 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.097094 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-rkhk6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.103380 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.116183 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.117969 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq4wz\" (UniqueName: \"kubernetes.io/projected/25439b6f-5a2a-4577-afd8-787d44877848-kube-api-access-wq4wz\") pod \"barbican-operator-controller-manager-7ddb5c749-rcbnl\" (UID: \"25439b6f-5a2a-4577-afd8-787d44877848\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.118052 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9j9h\" (UniqueName: \"kubernetes.io/projected/0389b651-5be7-45a2-bba7-d204285978e7-kube-api-access-r9j9h\") pod \"designate-operator-controller-manager-9f958b845-tnrj8\" (UID: \"0389b651-5be7-45a2-bba7-d204285978e7\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.118075 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8mdb\" (UniqueName: \"kubernetes.io/projected/765cfe55-b774-4345-8884-ca22330cf340-kube-api-access-j8mdb\") pod \"cinder-operator-controller-manager-9b68f5989-lvhpk\" (UID: \"765cfe55-b774-4345-8884-ca22330cf340\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.137637 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.138360 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.139649 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dgf9w" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.154730 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.155412 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.164300 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-fjgp2" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.164386 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.165137 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.175469 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pzqgs" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.175997 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.189204 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.228257 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26gv\" (UniqueName: \"kubernetes.io/projected/0b29f776-9321-4967-bf9b-6fe35cf6c195-kube-api-access-x26gv\") pod \"glance-operator-controller-manager-c6994669c-mzmtp\" (UID: \"0b29f776-9321-4967-bf9b-6fe35cf6c195\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.228302 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9j9h\" (UniqueName: \"kubernetes.io/projected/0389b651-5be7-45a2-bba7-d204285978e7-kube-api-access-r9j9h\") pod \"designate-operator-controller-manager-9f958b845-tnrj8\" (UID: \"0389b651-5be7-45a2-bba7-d204285978e7\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.228323 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8mdb\" (UniqueName: \"kubernetes.io/projected/765cfe55-b774-4345-8884-ca22330cf340-kube-api-access-j8mdb\") pod \"cinder-operator-controller-manager-9b68f5989-lvhpk\" (UID: \"765cfe55-b774-4345-8884-ca22330cf340\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.228346 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8r7x\" (UniqueName: \"kubernetes.io/projected/45d870c0-6af6-4cb0-9704-2ffafb2c423c-kube-api-access-w8r7x\") pod \"ironic-operator-controller-manager-78757b4889-7wx4b\" (UID: \"45d870c0-6af6-4cb0-9704-2ffafb2c423c\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.228388 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg75r\" (UniqueName: \"kubernetes.io/projected/d5cf00c9-700f-4f7b-98e1-626fdc638e32-kube-api-access-xg75r\") pod \"heat-operator-controller-manager-5d87976b78-jpmbw\" (UID: 
\"d5cf00c9-700f-4f7b-98e1-626fdc638e32\") " pod="openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.228412 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq4wz\" (UniqueName: \"kubernetes.io/projected/25439b6f-5a2a-4577-afd8-787d44877848-kube-api-access-wq4wz\") pod \"barbican-operator-controller-manager-7ddb5c749-rcbnl\" (UID: \"25439b6f-5a2a-4577-afd8-787d44877848\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.228456 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxtc\" (UniqueName: \"kubernetes.io/projected/c88cac18-f08f-4ad2-8bf2-21d27972223a-kube-api-access-cwxtc\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.228473 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.228500 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j8bw\" (UniqueName: \"kubernetes.io/projected/1458adb7-c7bd-4a1d-8d55-1f9b9cd98e14-kube-api-access-6j8bw\") pod \"horizon-operator-controller-manager-77d5c5b54f-vhlsd\" (UID: \"1458adb7-c7bd-4a1d-8d55-1f9b9cd98e14\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.229307 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.238529 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.238586 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.245562 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.251366 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-tqkpp" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.258610 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.259497 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.263963 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nf9rg" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.269056 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8mdb\" (UniqueName: \"kubernetes.io/projected/765cfe55-b774-4345-8884-ca22330cf340-kube-api-access-j8mdb\") pod \"cinder-operator-controller-manager-9b68f5989-lvhpk\" (UID: \"765cfe55-b774-4345-8884-ca22330cf340\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.272022 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.283145 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.285016 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq4wz\" (UniqueName: \"kubernetes.io/projected/25439b6f-5a2a-4577-afd8-787d44877848-kube-api-access-wq4wz\") pod \"barbican-operator-controller-manager-7ddb5c749-rcbnl\" (UID: \"25439b6f-5a2a-4577-afd8-787d44877848\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.286056 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9j9h\" (UniqueName: \"kubernetes.io/projected/0389b651-5be7-45a2-bba7-d204285978e7-kube-api-access-r9j9h\") pod \"designate-operator-controller-manager-9f958b845-tnrj8\" (UID: \"0389b651-5be7-45a2-bba7-d204285978e7\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.291511 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.292272 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.294520 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-m9fm4" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.320234 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.323647 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.330342 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwxtc\" (UniqueName: \"kubernetes.io/projected/c88cac18-f08f-4ad2-8bf2-21d27972223a-kube-api-access-cwxtc\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.330377 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.330401 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4kg9\" (UniqueName: \"kubernetes.io/projected/1aeab6f0-46f5-41ac-a3b7-4d428ab7c321-kube-api-access-w4kg9\") pod \"keystone-operator-controller-manager-767fdc4f47-2vrjf\" (UID: \"1aeab6f0-46f5-41ac-a3b7-4d428ab7c321\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.330447 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j8bw\" (UniqueName: \"kubernetes.io/projected/1458adb7-c7bd-4a1d-8d55-1f9b9cd98e14-kube-api-access-6j8bw\") pod \"horizon-operator-controller-manager-77d5c5b54f-vhlsd\" (UID: \"1458adb7-c7bd-4a1d-8d55-1f9b9cd98e14\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.330469 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfksf\" (UniqueName: \"kubernetes.io/projected/3501119f-a33c-4069-bd8d-fe5fb5ef021b-kube-api-access-cfksf\") pod \"manila-operator-controller-manager-864f6b75bf-bzjfx\" (UID: \"3501119f-a33c-4069-bd8d-fe5fb5ef021b\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.330500 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x26gv\" (UniqueName: \"kubernetes.io/projected/0b29f776-9321-4967-bf9b-6fe35cf6c195-kube-api-access-x26gv\") pod \"glance-operator-controller-manager-c6994669c-mzmtp\" (UID: \"0b29f776-9321-4967-bf9b-6fe35cf6c195\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.330524 4898 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w8r7x\" (UniqueName: \"kubernetes.io/projected/45d870c0-6af6-4cb0-9704-2ffafb2c423c-kube-api-access-w8r7x\") pod \"ironic-operator-controller-manager-78757b4889-7wx4b\" (UID: \"45d870c0-6af6-4cb0-9704-2ffafb2c423c\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.330560 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqqxq\" (UniqueName: \"kubernetes.io/projected/ee913f4e-9e00-45c8-9af4-191ecef1a2ff-kube-api-access-sqqxq\") pod \"mariadb-operator-controller-manager-c87fff755-mc2n6\" (UID: \"ee913f4e-9e00-45c8-9af4-191ecef1a2ff\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.330586 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg75r\" (UniqueName: \"kubernetes.io/projected/d5cf00c9-700f-4f7b-98e1-626fdc638e32-kube-api-access-xg75r\") pod \"heat-operator-controller-manager-5d87976b78-jpmbw\" (UID: \"d5cf00c9-700f-4f7b-98e1-626fdc638e32\") " pod="openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw" Jan 20 04:03:22 crc kubenswrapper[4898]: E0120 04:03:22.331049 4898 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:22 crc kubenswrapper[4898]: E0120 04:03:22.331100 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert podName:c88cac18-f08f-4ad2-8bf2-21d27972223a nodeName:}" failed. No retries permitted until 2026-01-20 04:03:22.831084681 +0000 UTC m=+849.430872530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert") pod "infra-operator-controller-manager-77c48c7859-pzzk6" (UID: "c88cac18-f08f-4ad2-8bf2-21d27972223a") : secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.366451 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.367421 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.374715 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x26gv\" (UniqueName: \"kubernetes.io/projected/0b29f776-9321-4967-bf9b-6fe35cf6c195-kube-api-access-x26gv\") pod \"glance-operator-controller-manager-c6994669c-mzmtp\" (UID: \"0b29f776-9321-4967-bf9b-6fe35cf6c195\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.375149 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sv67g" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.375208 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxtc\" (UniqueName: \"kubernetes.io/projected/c88cac18-f08f-4ad2-8bf2-21d27972223a-kube-api-access-cwxtc\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.375279 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8r7x\" (UniqueName: \"kubernetes.io/projected/45d870c0-6af6-4cb0-9704-2ffafb2c423c-kube-api-access-w8r7x\") pod \"ironic-operator-controller-manager-78757b4889-7wx4b\" (UID: \"45d870c0-6af6-4cb0-9704-2ffafb2c423c\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.375803 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg75r\" (UniqueName: \"kubernetes.io/projected/d5cf00c9-700f-4f7b-98e1-626fdc638e32-kube-api-access-xg75r\") pod \"heat-operator-controller-manager-5d87976b78-jpmbw\" (UID: \"d5cf00c9-700f-4f7b-98e1-626fdc638e32\") " pod="openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.380960 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.381782 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.383181 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-q5zhs" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.389205 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j8bw\" (UniqueName: \"kubernetes.io/projected/1458adb7-c7bd-4a1d-8d55-1f9b9cd98e14-kube-api-access-6j8bw\") pod \"horizon-operator-controller-manager-77d5c5b54f-vhlsd\" (UID: \"1458adb7-c7bd-4a1d-8d55-1f9b9cd98e14\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.403924 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.404861 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.410681 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dhdx7" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.422076 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.429210 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.431414 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4kg9\" (UniqueName: \"kubernetes.io/projected/1aeab6f0-46f5-41ac-a3b7-4d428ab7c321-kube-api-access-w4kg9\") pod \"keystone-operator-controller-manager-767fdc4f47-2vrjf\" (UID: \"1aeab6f0-46f5-41ac-a3b7-4d428ab7c321\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.431571 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvffv\" (UniqueName: \"kubernetes.io/projected/c2673446-b027-4a75-b0d3-e823d7da9b4b-kube-api-access-vvffv\") pod \"neutron-operator-controller-manager-cb4666565-pvq64\" (UID: \"c2673446-b027-4a75-b0d3-e823d7da9b4b\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.431611 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfksf\" (UniqueName: \"kubernetes.io/projected/3501119f-a33c-4069-bd8d-fe5fb5ef021b-kube-api-access-cfksf\") pod \"manila-operator-controller-manager-864f6b75bf-bzjfx\" (UID: \"3501119f-a33c-4069-bd8d-fe5fb5ef021b\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.431673 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqqxq\" (UniqueName: \"kubernetes.io/projected/ee913f4e-9e00-45c8-9af4-191ecef1a2ff-kube-api-access-sqqxq\") pod \"mariadb-operator-controller-manager-c87fff755-mc2n6\" (UID: \"ee913f4e-9e00-45c8-9af4-191ecef1a2ff\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.431841 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfdsg\" (UniqueName: \"kubernetes.io/projected/e4f2f74e-46b2-4c21-8b82-13f450218389-kube-api-access-qfdsg\") pod \"nova-operator-controller-manager-65849867d6-wh8bc\" (UID: \"e4f2f74e-46b2-4c21-8b82-13f450218389\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.432116 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.440965 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.446867 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.450168 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.450370 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.451201 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4kg9\" (UniqueName: \"kubernetes.io/projected/1aeab6f0-46f5-41ac-a3b7-4d428ab7c321-kube-api-access-w4kg9\") pod \"keystone-operator-controller-manager-767fdc4f47-2vrjf\" (UID: \"1aeab6f0-46f5-41ac-a3b7-4d428ab7c321\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.457325 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqqxq\" (UniqueName: \"kubernetes.io/projected/ee913f4e-9e00-45c8-9af4-191ecef1a2ff-kube-api-access-sqqxq\") pod \"mariadb-operator-controller-manager-c87fff755-mc2n6\" (UID: \"ee913f4e-9e00-45c8-9af4-191ecef1a2ff\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.458936 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.470621 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.472547 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.475291 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.475962 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wbgnh" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.476134 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.476958 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.477177 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfksf\" (UniqueName: \"kubernetes.io/projected/3501119f-a33c-4069-bd8d-fe5fb5ef021b-kube-api-access-cfksf\") pod \"manila-operator-controller-manager-864f6b75bf-bzjfx\" (UID: \"3501119f-a33c-4069-bd8d-fe5fb5ef021b\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.477893 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.478809 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-9c46f" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.493773 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.496536 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.503791 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.506484 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.507307 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.513203 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-wwrpq" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.515966 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.517013 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.518738 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-xk4fc" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.525299 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.532499 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhthm\" (UniqueName: \"kubernetes.io/projected/79a8ab34-c753-4e7d-8152-a62f7084c84e-kube-api-access-mhthm\") pod \"ovn-operator-controller-manager-55db956ddc-s42q8\" (UID: \"79a8ab34-c753-4e7d-8152-a62f7084c84e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.532563 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfdsg\" (UniqueName: \"kubernetes.io/projected/e4f2f74e-46b2-4c21-8b82-13f450218389-kube-api-access-qfdsg\") pod \"nova-operator-controller-manager-65849867d6-wh8bc\" (UID: \"e4f2f74e-46b2-4c21-8b82-13f450218389\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.532596 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xm5j\" (UniqueName: \"kubernetes.io/projected/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-kube-api-access-2xm5j\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk\" (UID: \"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.532639 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvffv\" (UniqueName: \"kubernetes.io/projected/c2673446-b027-4a75-b0d3-e823d7da9b4b-kube-api-access-vvffv\") pod \"neutron-operator-controller-manager-cb4666565-pvq64\" (UID: \"c2673446-b027-4a75-b0d3-e823d7da9b4b\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.532664 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqjvz\" (UniqueName: \"kubernetes.io/projected/e386e470-2db0-442a-8dd1-853ffe97e0f7-kube-api-access-lqjvz\") pod \"octavia-operator-controller-manager-7fc9b76cf6-5bnlz\" (UID: \"e386e470-2db0-442a-8dd1-853ffe97e0f7\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.532687 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk\" (UID: \"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.535730 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc"] Jan 20 04:03:22 crc 
kubenswrapper[4898]: I0120 04:03:22.558040 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvffv\" (UniqueName: \"kubernetes.io/projected/c2673446-b027-4a75-b0d3-e823d7da9b4b-kube-api-access-vvffv\") pod \"neutron-operator-controller-manager-cb4666565-pvq64\" (UID: \"c2673446-b027-4a75-b0d3-e823d7da9b4b\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.564371 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.564986 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.566954 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfdsg\" (UniqueName: \"kubernetes.io/projected/e4f2f74e-46b2-4c21-8b82-13f450218389-kube-api-access-qfdsg\") pod \"nova-operator-controller-manager-65849867d6-wh8bc\" (UID: \"e4f2f74e-46b2-4c21-8b82-13f450218389\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.574629 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.580627 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-kbg78" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.583689 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.651323 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.652143 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhthm\" (UniqueName: \"kubernetes.io/projected/79a8ab34-c753-4e7d-8152-a62f7084c84e-kube-api-access-mhthm\") pod \"ovn-operator-controller-manager-55db956ddc-s42q8\" (UID: \"79a8ab34-c753-4e7d-8152-a62f7084c84e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.652245 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xm5j\" (UniqueName: \"kubernetes.io/projected/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-kube-api-access-2xm5j\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk\" (UID: \"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.652308 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jscp8\" (UniqueName: \"kubernetes.io/projected/ed13994e-012c-4775-9bac-c35117a1630b-kube-api-access-jscp8\") pod \"swift-operator-controller-manager-85dd56d4cc-9wjw5\" (UID: \"ed13994e-012c-4775-9bac-c35117a1630b\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.652590 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqjvz\" (UniqueName: \"kubernetes.io/projected/e386e470-2db0-442a-8dd1-853ffe97e0f7-kube-api-access-lqjvz\") pod \"octavia-operator-controller-manager-7fc9b76cf6-5bnlz\" (UID: \"e386e470-2db0-442a-8dd1-853ffe97e0f7\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.652617 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.652636 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk\" (UID: \"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.652870 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsw8j\" (UniqueName: \"kubernetes.io/projected/b7b63269-33e7-4ef9-bf03-e37aac59ce07-kube-api-access-nsw8j\") pod \"placement-operator-controller-manager-686df47fcb-b4bkc\" (UID: \"b7b63269-33e7-4ef9-bf03-e37aac59ce07\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.652933 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bwqv\" (UniqueName: \"kubernetes.io/projected/dae6293c-0cae-4aff-a936-85ed72377a31-kube-api-access-4bwqv\") pod \"telemetry-operator-controller-manager-5f8f495fcf-jfvrm\" (UID: \"dae6293c-0cae-4aff-a936-85ed72377a31\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" Jan 20 04:03:22 crc kubenswrapper[4898]: E0120 04:03:22.662762 4898 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 04:03:22 crc kubenswrapper[4898]: E0120 04:03:22.662924 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert podName:56d05f5e-aa64-4ad6-94e0-aa14aa9317cb nodeName:}" failed. No retries permitted until 2026-01-20 04:03:23.162889594 +0000 UTC m=+849.762677453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" (UID: "56d05f5e-aa64-4ad6-94e0-aa14aa9317cb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.676482 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.679218 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.684269 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhthm\" (UniqueName: \"kubernetes.io/projected/79a8ab34-c753-4e7d-8152-a62f7084c84e-kube-api-access-mhthm\") pod \"ovn-operator-controller-manager-55db956ddc-s42q8\" (UID: \"79a8ab34-c753-4e7d-8152-a62f7084c84e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.684381 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-sxhx2" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.686722 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.688301 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqjvz\" (UniqueName: \"kubernetes.io/projected/e386e470-2db0-442a-8dd1-853ffe97e0f7-kube-api-access-lqjvz\") pod \"octavia-operator-controller-manager-7fc9b76cf6-5bnlz\" (UID: \"e386e470-2db0-442a-8dd1-853ffe97e0f7\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.697109 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xm5j\" (UniqueName: \"kubernetes.io/projected/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-kube-api-access-2xm5j\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk\" (UID: \"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.703271 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.704650 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.713137 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-5bpxr" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.721209 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.738317 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.757707 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.758066 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.758806 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jscp8\" (UniqueName: \"kubernetes.io/projected/ed13994e-012c-4775-9bac-c35117a1630b-kube-api-access-jscp8\") pod \"swift-operator-controller-manager-85dd56d4cc-9wjw5\" (UID: \"ed13994e-012c-4775-9bac-c35117a1630b\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.758890 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsw8j\" (UniqueName: \"kubernetes.io/projected/b7b63269-33e7-4ef9-bf03-e37aac59ce07-kube-api-access-nsw8j\") pod \"placement-operator-controller-manager-686df47fcb-b4bkc\" (UID: \"b7b63269-33e7-4ef9-bf03-e37aac59ce07\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.758926 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bwqv\" (UniqueName: \"kubernetes.io/projected/dae6293c-0cae-4aff-a936-85ed72377a31-kube-api-access-4bwqv\") pod \"telemetry-operator-controller-manager-5f8f495fcf-jfvrm\" (UID: \"dae6293c-0cae-4aff-a936-85ed72377a31\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.773191 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.774206 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.783085 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.783207 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bwqv\" (UniqueName: \"kubernetes.io/projected/dae6293c-0cae-4aff-a936-85ed72377a31-kube-api-access-4bwqv\") pod \"telemetry-operator-controller-manager-5f8f495fcf-jfvrm\" (UID: \"dae6293c-0cae-4aff-a936-85ed72377a31\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.783256 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.784047 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gszt6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.794924 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jscp8\" (UniqueName: \"kubernetes.io/projected/ed13994e-012c-4775-9bac-c35117a1630b-kube-api-access-jscp8\") pod \"swift-operator-controller-manager-85dd56d4cc-9wjw5\" (UID: \"ed13994e-012c-4775-9bac-c35117a1630b\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.802840 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsw8j\" (UniqueName: \"kubernetes.io/projected/b7b63269-33e7-4ef9-bf03-e37aac59ce07-kube-api-access-nsw8j\") pod \"placement-operator-controller-manager-686df47fcb-b4bkc\" (UID: \"b7b63269-33e7-4ef9-bf03-e37aac59ce07\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.805394 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.830710 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.830785 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.861886 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.862051 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx25d\" (UniqueName: \"kubernetes.io/projected/0c3a8459-142e-4e4d-8546-220b3feec6ec-kube-api-access-bx25d\") pod \"watcher-operator-controller-manager-64cd966744-56hrd\" (UID: \"0c3a8459-142e-4e4d-8546-220b3feec6ec\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.862085 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btqlt\" (UniqueName: \"kubernetes.io/projected/e40924a1-c172-4372-8adb-3919447c7207-kube-api-access-btqlt\") pod \"test-operator-controller-manager-7cd8bc9dbb-74nht\" (UID: \"e40924a1-c172-4372-8adb-3919447c7207\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.863313 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5" Jan 20 04:03:22 crc kubenswrapper[4898]: E0120 04:03:22.864619 4898 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:22 crc kubenswrapper[4898]: E0120 04:03:22.864668 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert podName:c88cac18-f08f-4ad2-8bf2-21d27972223a nodeName:}" failed. No retries permitted until 2026-01-20 04:03:23.864648674 +0000 UTC m=+850.464436743 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert") pod "infra-operator-controller-manager-77c48c7859-pzzk6" (UID: "c88cac18-f08f-4ad2-8bf2-21d27972223a") : secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.884824 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.885789 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.888811 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-9r74x" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.893655 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr"] Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.965972 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.966051 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.966881 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2dq7\" (UniqueName: \"kubernetes.io/projected/6071d625-ea99-445e-a23c-31cf9e37b1f6-kube-api-access-t2dq7\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.966917 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx25d\" (UniqueName: \"kubernetes.io/projected/0c3a8459-142e-4e4d-8546-220b3feec6ec-kube-api-access-bx25d\") pod \"watcher-operator-controller-manager-64cd966744-56hrd\" (UID: \"0c3a8459-142e-4e4d-8546-220b3feec6ec\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.966996 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btqlt\" (UniqueName: \"kubernetes.io/projected/e40924a1-c172-4372-8adb-3919447c7207-kube-api-access-btqlt\") pod \"test-operator-controller-manager-7cd8bc9dbb-74nht\" (UID: \"e40924a1-c172-4372-8adb-3919447c7207\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.986739 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btqlt\" (UniqueName: \"kubernetes.io/projected/e40924a1-c172-4372-8adb-3919447c7207-kube-api-access-btqlt\") pod \"test-operator-controller-manager-7cd8bc9dbb-74nht\" (UID: \"e40924a1-c172-4372-8adb-3919447c7207\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.988738 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" Jan 20 04:03:22 crc kubenswrapper[4898]: I0120 04:03:22.989891 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx25d\" (UniqueName: \"kubernetes.io/projected/0c3a8459-142e-4e4d-8546-220b3feec6ec-kube-api-access-bx25d\") pod \"watcher-operator-controller-manager-64cd966744-56hrd\" (UID: \"0c3a8459-142e-4e4d-8546-220b3feec6ec\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.016169 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.032805 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.057714 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk"] Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.068176 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.068236 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.068270 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2dq7\" (UniqueName: \"kubernetes.io/projected/6071d625-ea99-445e-a23c-31cf9e37b1f6-kube-api-access-t2dq7\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.068311 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q46lq\" (UniqueName: \"kubernetes.io/projected/0985373f-27d2-41cb-ba87-5d5845588c6b-kube-api-access-q46lq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zjksr\" (UID: \"0985373f-27d2-41cb-ba87-5d5845588c6b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr" Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 04:03:23.068488 4898 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 04:03:23.068554 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. 
No retries permitted until 2026-01-20 04:03:23.568537482 +0000 UTC m=+850.168325331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "metrics-server-cert" not found Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 04:03:23.068687 4898 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 04:03:23.068742 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. No retries permitted until 2026-01-20 04:03:23.568727438 +0000 UTC m=+850.168515297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "webhook-server-cert" not found Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.091255 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2dq7\" (UniqueName: \"kubernetes.io/projected/6071d625-ea99-445e-a23c-31cf9e37b1f6-kube-api-access-t2dq7\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.097233 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl"] Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.169311 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk\" (UID: \"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.169623 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46lq\" (UniqueName: \"kubernetes.io/projected/0985373f-27d2-41cb-ba87-5d5845588c6b-kube-api-access-q46lq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zjksr\" (UID: \"0985373f-27d2-41cb-ba87-5d5845588c6b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr" Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 04:03:23.170009 4898 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 04:03:23.170053 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert podName:56d05f5e-aa64-4ad6-94e0-aa14aa9317cb nodeName:}" failed. No retries permitted until 2026-01-20 04:03:24.170040952 +0000 UTC m=+850.769828811 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" (UID: "56d05f5e-aa64-4ad6-94e0-aa14aa9317cb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.203443 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q46lq\" (UniqueName: \"kubernetes.io/projected/0985373f-27d2-41cb-ba87-5d5845588c6b-kube-api-access-q46lq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zjksr\" (UID: \"0985373f-27d2-41cb-ba87-5d5845588c6b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr" Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.251941 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr" Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.264605 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl" event={"ID":"25439b6f-5a2a-4577-afd8-787d44877848","Type":"ContainerStarted","Data":"b73008c0b823bc33399a16cc56e607a52f16a318035fadba290a133594896e4f"} Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.269864 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk" event={"ID":"765cfe55-b774-4345-8884-ca22330cf340","Type":"ContainerStarted","Data":"08431cf6805730a987557f20add037c7a9c6c5d418fa821b01eebb55528c8527"} Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.296504 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8"] Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.306153 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd"] Jan 20 04:03:23 crc kubenswrapper[4898]: W0120 04:03:23.413690 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0389b651_5be7_45a2_bba7_d204285978e7.slice/crio-8155f92b71675f339fab0fe111d0c905f6852de14227f3ad82d543447bc10e7c WatchSource:0}: Error finding container 8155f92b71675f339fab0fe111d0c905f6852de14227f3ad82d543447bc10e7c: Status 404 returned error can't find the container with id 8155f92b71675f339fab0fe111d0c905f6852de14227f3ad82d543447bc10e7c Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.574067 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.574377 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 
04:03:23.574198 4898 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 04:03:23.574592 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. No retries permitted until 2026-01-20 04:03:24.574576235 +0000 UTC m=+851.174364094 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "webhook-server-cert" not found Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 04:03:23.574534 4898 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 04:03:23.574930 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. No retries permitted until 2026-01-20 04:03:24.574921846 +0000 UTC m=+851.174709705 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "metrics-server-cert" not found Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.597046 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw"] Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.612341 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b"] Jan 20 04:03:23 crc kubenswrapper[4898]: I0120 04:03:23.878116 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 04:03:23.878287 4898 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:23 crc kubenswrapper[4898]: E0120 04:03:23.878334 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert podName:c88cac18-f08f-4ad2-8bf2-21d27972223a nodeName:}" failed. No retries permitted until 2026-01-20 04:03:25.87831928 +0000 UTC m=+852.478107139 (durationBeforeRetry 2s). 
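
[analysis note] Note the durationBeforeRetry progression for each failing volume: the openstack-baremetal cert goes 500ms (04:03:22.662) then 1s (04:03:23.170), the infra-operator cert 1s (04:03:22.864) then 2s here, i.e. nestedpendingoperations requeues a failed mount with a delay that doubles per attempt. A minimal doubling-with-cap sketch; the cap value is an illustrative assumption, not the kubelet's exact constant.

```go
// sketch: the doubling retry delay visible in the log
// (durationBeforeRetry 500ms -> 1s -> 2s for the same volume).
package main

import (
	"fmt"
	"time"
)

// nextDelay yields the current retry delay, then doubles it up to maxDelay.
func nextDelay(initial, maxDelay time.Duration) func() time.Duration {
	d := initial
	return func() time.Duration {
		cur := d
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
		return cur
	}
}

func main() {
	next := nextDelay(500*time.Millisecond, 2*time.Minute) // cap is assumed
	for i := 0; i < 4; i++ {
		fmt.Println("durationBeforeRetry", next()) // 500ms, 1s, 2s, 4s
	}
}
```
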
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert") pod "infra-operator-controller-manager-77c48c7859-pzzk6" (UID: "c88cac18-f08f-4ad2-8bf2-21d27972223a") : secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.062219 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx"] Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.070810 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp"] Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.080525 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6"] Jan 20 04:03:24 crc kubenswrapper[4898]: W0120 04:03:24.095741 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded13994e_012c_4775_9bac_c35117a1630b.slice/crio-6af04ceb52a1a8dd094b25f7a3414cf537c8b3ff73b0ef8eb9ef935e6ab50f6a WatchSource:0}: Error finding container 6af04ceb52a1a8dd094b25f7a3414cf537c8b3ff73b0ef8eb9ef935e6ab50f6a: Status 404 returned error can't find the container with id 6af04ceb52a1a8dd094b25f7a3414cf537c8b3ff73b0ef8eb9ef935e6ab50f6a Jan 20 04:03:24 crc kubenswrapper[4898]: W0120 04:03:24.100479 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee913f4e_9e00_45c8_9af4_191ecef1a2ff.slice/crio-7fc84d91afbf2bebfbc284838800a050ef8467312a7af1fef95a3f4a15248339 WatchSource:0}: Error finding container 7fc84d91afbf2bebfbc284838800a050ef8467312a7af1fef95a3f4a15248339: Status 404 returned error can't find the container with id 7fc84d91afbf2bebfbc284838800a050ef8467312a7af1fef95a3f4a15248339 Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.114017 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc"] Jan 20 04:03:24 crc kubenswrapper[4898]: W0120 04:03:24.132536 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2673446_b027_4a75_b0d3_e823d7da9b4b.slice/crio-cc32b792512a2593827b92a6ee1581a4acc154831a2226030587f2a692a095b3 WatchSource:0}: Error finding container cc32b792512a2593827b92a6ee1581a4acc154831a2226030587f2a692a095b3: Status 404 returned error can't find the container with id cc32b792512a2593827b92a6ee1581a4acc154831a2226030587f2a692a095b3 Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.133783 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5"] Jan 20 04:03:24 crc kubenswrapper[4898]: W0120 04:03:24.139857 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7b63269_33e7_4ef9_bf03_e37aac59ce07.slice/crio-d44d5efae31d84c15e226d3d8703255b1e797d52098ede67ae6ba2223673329a WatchSource:0}: Error finding container d44d5efae31d84c15e226d3d8703255b1e797d52098ede67ae6ba2223673329a: Status 404 returned error can't find the container with id d44d5efae31d84c15e226d3d8703255b1e797d52098ede67ae6ba2223673329a Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.149310 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf"] Jan 20 04:03:24 crc kubenswrapper[4898]: W0120 04:03:24.151881 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79a8ab34_c753_4e7d_8152_a62f7084c84e.slice/crio-5b327a2dd7bf2be10cc5b80be1679834fcd5a6f2b89b66238122d14a16ab5d57 WatchSource:0}: Error finding container 5b327a2dd7bf2be10cc5b80be1679834fcd5a6f2b89b66238122d14a16ab5d57: Status 404 returned error can't find the container with id 5b327a2dd7bf2be10cc5b80be1679834fcd5a6f2b89b66238122d14a16ab5d57 Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.155200 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64"] Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.164728 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4bwqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-jfvrm_openstack-operators(dae6293c-0cae-4aff-a936-85ed72377a31): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.166194 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" podUID="dae6293c-0cae-4aff-a936-85ed72377a31" Jan 20 04:03:24 crc kubenswrapper[4898]: W0120 04:03:24.166738 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode40924a1_c172_4372_8adb_3919447c7207.slice/crio-90560ed8639fca2b54a40f3bfee1409d7d71c5a7bfc367d5af462e90355c2ec2 WatchSource:0}: Error finding container 90560ed8639fca2b54a40f3bfee1409d7d71c5a7bfc367d5af462e90355c2ec2: Status 404 returned error can't find the container with id 90560ed8639fca2b54a40f3bfee1409d7d71c5a7bfc367d5af462e90355c2ec2 Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.169655 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-btqlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-74nht_openstack-operators(e40924a1-c172-4372-8adb-3919447c7207): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.171137 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" 
podUID="e40924a1-c172-4372-8adb-3919447c7207" Jan 20 04:03:24 crc kubenswrapper[4898]: W0120 04:03:24.173882 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c3a8459_142e_4e4d_8546_220b3feec6ec.slice/crio-bcab46c0b5c990245c68473b5f201a44a62bef9b54fda4f417f6ba8db6910e1c WatchSource:0}: Error finding container bcab46c0b5c990245c68473b5f201a44a62bef9b54fda4f417f6ba8db6910e1c: Status 404 returned error can't find the container with id bcab46c0b5c990245c68473b5f201a44a62bef9b54fda4f417f6ba8db6910e1c Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.175413 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc"] Jan 20 04:03:24 crc kubenswrapper[4898]: W0120 04:03:24.179029 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode386e470_2db0_442a_8dd1_853ffe97e0f7.slice/crio-11a6d6cb0d1a1ae6d33dae08eb95a3c961b886c506e006a50f1c10e325460c31 WatchSource:0}: Error finding container 11a6d6cb0d1a1ae6d33dae08eb95a3c961b886c506e006a50f1c10e325460c31: Status 404 returned error can't find the container with id 11a6d6cb0d1a1ae6d33dae08eb95a3c961b886c506e006a50f1c10e325460c31 Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.180919 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8"] Jan 20 04:03:24 crc kubenswrapper[4898]: W0120 04:03:24.181744 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0985373f_27d2_41cb_ba87_5d5845588c6b.slice/crio-035b111e9e394f2b97494e0d12a0478140e868b0586b37351a784be48b5c1585 WatchSource:0}: Error finding container 035b111e9e394f2b97494e0d12a0478140e868b0586b37351a784be48b5c1585: Status 404 returned error can't find the container with id 035b111e9e394f2b97494e0d12a0478140e868b0586b37351a784be48b5c1585 Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.183613 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk\" (UID: \"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.183769 4898 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.183844 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert podName:56d05f5e-aa64-4ad6-94e0-aa14aa9317cb nodeName:}" failed. No retries permitted until 2026-01-20 04:03:26.183803161 +0000 UTC m=+852.783591010 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" (UID: "56d05f5e-aa64-4ad6-94e0-aa14aa9317cb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.184364 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bx25d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-56hrd_openstack-operators(0c3a8459-142e-4e4d-8546-220b3feec6ec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.184663 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd"] Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.185572 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" podUID="0c3a8459-142e-4e4d-8546-220b3feec6ec" Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.185684 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lqjvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-5bnlz_openstack-operators(e386e470-2db0-442a-8dd1-853ffe97e0f7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.186872 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" podUID="e386e470-2db0-442a-8dd1-853ffe97e0f7" Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.188745 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz"] Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.190536 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q46lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zjksr_openstack-operators(0985373f-27d2-41cb-ba87-5d5845588c6b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.191994 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr" podUID="0985373f-27d2-41cb-ba87-5d5845588c6b" Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.193410 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr"] Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.197555 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht"] Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.201354 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm"] Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.283362 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8" event={"ID":"79a8ab34-c753-4e7d-8152-a62f7084c84e","Type":"ContainerStarted","Data":"5b327a2dd7bf2be10cc5b80be1679834fcd5a6f2b89b66238122d14a16ab5d57"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.285552 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf" 
event={"ID":"1aeab6f0-46f5-41ac-a3b7-4d428ab7c321","Type":"ContainerStarted","Data":"dcdb307207a50dc722e784044d25a1e022546715dc4184563b8933dd57637461"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.287106 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw" event={"ID":"d5cf00c9-700f-4f7b-98e1-626fdc638e32","Type":"ContainerStarted","Data":"a8409a9c03a0a6e9dd5053ff17c10ee6762cf89af83f015d568d130ba17e8f29"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.288329 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc" event={"ID":"e4f2f74e-46b2-4c21-8b82-13f450218389","Type":"ContainerStarted","Data":"226260da6e1dc263fe53c604c0b0efa09f76df10f6ceafa0833fc949c453f267"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.289666 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b" event={"ID":"45d870c0-6af6-4cb0-9704-2ffafb2c423c","Type":"ContainerStarted","Data":"a6f6ebea5e419ee958b2b31207b08ae79c6f5870a4115be13a4e38da1063dfbd"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.292688 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc" event={"ID":"b7b63269-33e7-4ef9-bf03-e37aac59ce07","Type":"ContainerStarted","Data":"d44d5efae31d84c15e226d3d8703255b1e797d52098ede67ae6ba2223673329a"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.294032 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8" event={"ID":"0389b651-5be7-45a2-bba7-d204285978e7","Type":"ContainerStarted","Data":"8155f92b71675f339fab0fe111d0c905f6852de14227f3ad82d543447bc10e7c"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.295281 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" event={"ID":"dae6293c-0cae-4aff-a936-85ed72377a31","Type":"ContainerStarted","Data":"5ee17513586990e2b9c831b3cca10ad2057ecee24f56e41645c7f72ceae3b0d6"} Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.296529 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" podUID="dae6293c-0cae-4aff-a936-85ed72377a31" Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.296630 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr" event={"ID":"0985373f-27d2-41cb-ba87-5d5845588c6b","Type":"ContainerStarted","Data":"035b111e9e394f2b97494e0d12a0478140e868b0586b37351a784be48b5c1585"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.297694 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd" event={"ID":"1458adb7-c7bd-4a1d-8d55-1f9b9cd98e14","Type":"ContainerStarted","Data":"e725276a22954a0f3fa304ca4d973f4447fa708485421717b1046d4137ff48a7"} Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.298146 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr" podUID="0985373f-27d2-41cb-ba87-5d5845588c6b" Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.299735 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64" event={"ID":"c2673446-b027-4a75-b0d3-e823d7da9b4b","Type":"ContainerStarted","Data":"cc32b792512a2593827b92a6ee1581a4acc154831a2226030587f2a692a095b3"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.307002 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6" event={"ID":"ee913f4e-9e00-45c8-9af4-191ecef1a2ff","Type":"ContainerStarted","Data":"7fc84d91afbf2bebfbc284838800a050ef8467312a7af1fef95a3f4a15248339"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.311965 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" event={"ID":"0c3a8459-142e-4e4d-8546-220b3feec6ec","Type":"ContainerStarted","Data":"bcab46c0b5c990245c68473b5f201a44a62bef9b54fda4f417f6ba8db6910e1c"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.313155 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" event={"ID":"e386e470-2db0-442a-8dd1-853ffe97e0f7","Type":"ContainerStarted","Data":"11a6d6cb0d1a1ae6d33dae08eb95a3c961b886c506e006a50f1c10e325460c31"} Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.314226 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" podUID="e386e470-2db0-442a-8dd1-853ffe97e0f7" Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.315045 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" podUID="0c3a8459-142e-4e4d-8546-220b3feec6ec" Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.315320 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx" event={"ID":"3501119f-a33c-4069-bd8d-fe5fb5ef021b","Type":"ContainerStarted","Data":"7a873e95fc5437b8324f0ae8bc6df6ffd2fff143d1784ec02a24d1030cd07dd4"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.316813 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" event={"ID":"e40924a1-c172-4372-8adb-3919447c7207","Type":"ContainerStarted","Data":"90560ed8639fca2b54a40f3bfee1409d7d71c5a7bfc367d5af462e90355c2ec2"} Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.318524 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" podUID="e40924a1-c172-4372-8adb-3919447c7207" Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.319526 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5" event={"ID":"ed13994e-012c-4775-9bac-c35117a1630b","Type":"ContainerStarted","Data":"6af04ceb52a1a8dd094b25f7a3414cf537c8b3ff73b0ef8eb9ef935e6ab50f6a"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.321794 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp" event={"ID":"0b29f776-9321-4967-bf9b-6fe35cf6c195","Type":"ContainerStarted","Data":"ded769da97dfc79de8e1fefd2986ce80bd2063bddaa9da81322d05fbb6fd2069"} Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.589776 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:24 crc kubenswrapper[4898]: I0120 04:03:24.590181 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.590051 4898 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.590387 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. No retries permitted until 2026-01-20 04:03:26.590369168 +0000 UTC m=+853.190157027 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "metrics-server-cert" not found Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.590322 4898 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 04:03:24 crc kubenswrapper[4898]: E0120 04:03:24.590720 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. No retries permitted until 2026-01-20 04:03:26.590651367 +0000 UTC m=+853.190439226 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "webhook-server-cert" not found Jan 20 04:03:25 crc kubenswrapper[4898]: E0120 04:03:25.329758 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" podUID="e386e470-2db0-442a-8dd1-853ffe97e0f7" Jan 20 04:03:25 crc kubenswrapper[4898]: E0120 04:03:25.330165 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" podUID="0c3a8459-142e-4e4d-8546-220b3feec6ec" Jan 20 04:03:25 crc kubenswrapper[4898]: E0120 04:03:25.330533 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" podUID="e40924a1-c172-4372-8adb-3919447c7207" Jan 20 04:03:25 crc kubenswrapper[4898]: E0120 04:03:25.333642 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" podUID="dae6293c-0cae-4aff-a936-85ed72377a31" Jan 20 04:03:25 crc kubenswrapper[4898]: E0120 04:03:25.333662 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr" podUID="0985373f-27d2-41cb-ba87-5d5845588c6b" Jan 20 04:03:25 crc kubenswrapper[4898]: E0120 04:03:25.910266 4898 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:25 crc kubenswrapper[4898]: I0120 04:03:25.910573 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:25 crc kubenswrapper[4898]: E0120 04:03:25.910946 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert podName:c88cac18-f08f-4ad2-8bf2-21d27972223a nodeName:}" 
failed. No retries permitted until 2026-01-20 04:03:29.910908588 +0000 UTC m=+856.510696667 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert") pod "infra-operator-controller-manager-77c48c7859-pzzk6" (UID: "c88cac18-f08f-4ad2-8bf2-21d27972223a") : secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:26 crc kubenswrapper[4898]: I0120 04:03:26.217702 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk\" (UID: \"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:26 crc kubenswrapper[4898]: E0120 04:03:26.218003 4898 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 04:03:26 crc kubenswrapper[4898]: E0120 04:03:26.218108 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert podName:56d05f5e-aa64-4ad6-94e0-aa14aa9317cb nodeName:}" failed. No retries permitted until 2026-01-20 04:03:30.218090392 +0000 UTC m=+856.817878251 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" (UID: "56d05f5e-aa64-4ad6-94e0-aa14aa9317cb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 04:03:26 crc kubenswrapper[4898]: I0120 04:03:26.624653 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:26 crc kubenswrapper[4898]: I0120 04:03:26.624777 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:26 crc kubenswrapper[4898]: E0120 04:03:26.624928 4898 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 04:03:26 crc kubenswrapper[4898]: E0120 04:03:26.625000 4898 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 04:03:26 crc kubenswrapper[4898]: E0120 04:03:26.625030 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. No retries permitted until 2026-01-20 04:03:30.62500584 +0000 UTC m=+857.224793709 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "webhook-server-cert" not found Jan 20 04:03:26 crc kubenswrapper[4898]: E0120 04:03:26.625095 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. No retries permitted until 2026-01-20 04:03:30.625069472 +0000 UTC m=+857.224857341 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "metrics-server-cert" not found Jan 20 04:03:29 crc kubenswrapper[4898]: I0120 04:03:29.915625 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:29 crc kubenswrapper[4898]: E0120 04:03:29.915938 4898 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:29 crc kubenswrapper[4898]: E0120 04:03:29.916192 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert podName:c88cac18-f08f-4ad2-8bf2-21d27972223a nodeName:}" failed. No retries permitted until 2026-01-20 04:03:37.916172648 +0000 UTC m=+864.515960507 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert") pod "infra-operator-controller-manager-77c48c7859-pzzk6" (UID: "c88cac18-f08f-4ad2-8bf2-21d27972223a") : secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:30 crc kubenswrapper[4898]: I0120 04:03:30.220286 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk\" (UID: \"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:30 crc kubenswrapper[4898]: E0120 04:03:30.220504 4898 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 04:03:30 crc kubenswrapper[4898]: E0120 04:03:30.220637 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert podName:56d05f5e-aa64-4ad6-94e0-aa14aa9317cb nodeName:}" failed. No retries permitted until 2026-01-20 04:03:38.220621666 +0000 UTC m=+864.820409525 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" (UID: "56d05f5e-aa64-4ad6-94e0-aa14aa9317cb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 04:03:30 crc kubenswrapper[4898]: I0120 04:03:30.626500 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:30 crc kubenswrapper[4898]: I0120 04:03:30.626638 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:30 crc kubenswrapper[4898]: E0120 04:03:30.626766 4898 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 04:03:30 crc kubenswrapper[4898]: E0120 04:03:30.626849 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. No retries permitted until 2026-01-20 04:03:38.626826332 +0000 UTC m=+865.226614201 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "webhook-server-cert" not found Jan 20 04:03:30 crc kubenswrapper[4898]: E0120 04:03:30.626880 4898 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 04:03:30 crc kubenswrapper[4898]: E0120 04:03:30.626957 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. No retries permitted until 2026-01-20 04:03:38.626932316 +0000 UTC m=+865.226720215 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "metrics-server-cert" not found Jan 20 04:03:37 crc kubenswrapper[4898]: I0120 04:03:37.953574 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:37 crc kubenswrapper[4898]: E0120 04:03:37.953787 4898 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:37 crc kubenswrapper[4898]: E0120 04:03:37.954296 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert podName:c88cac18-f08f-4ad2-8bf2-21d27972223a nodeName:}" failed. No retries permitted until 2026-01-20 04:03:53.954278083 +0000 UTC m=+880.554065942 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert") pod "infra-operator-controller-manager-77c48c7859-pzzk6" (UID: "c88cac18-f08f-4ad2-8bf2-21d27972223a") : secret "infra-operator-webhook-server-cert" not found Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.257805 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk\" (UID: \"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.266633 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56d05f5e-aa64-4ad6-94e0-aa14aa9317cb-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk\" (UID: \"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.395014 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.430475 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc" event={"ID":"b7b63269-33e7-4ef9-bf03-e37aac59ce07","Type":"ContainerStarted","Data":"a735d67af46628a7d42b83f674aad9b56d35e2577bdfc58bd9e1e4586d717b15"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.431358 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.443653 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8" event={"ID":"0389b651-5be7-45a2-bba7-d204285978e7","Type":"ContainerStarted","Data":"de78b319f101d5becc2fb28cff9e6ec17ac03a900b1c091034fe07b855a172cb"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.444303 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.454596 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6" event={"ID":"ee913f4e-9e00-45c8-9af4-191ecef1a2ff","Type":"ContainerStarted","Data":"d885b3b1306ddf8803ccf9e3d228a1e17537b65baec81441502807fc67d73fa7"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.455028 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.462204 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8" event={"ID":"79a8ab34-c753-4e7d-8152-a62f7084c84e","Type":"ContainerStarted","Data":"32352a7e7892294789d17bf4cc6b6dcef3c993d91cdb680ea979984b78da55a2"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.462725 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.482878 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk" event={"ID":"765cfe55-b774-4345-8884-ca22330cf340","Type":"ContainerStarted","Data":"2fb515a4a4a20157579c3b53724276cce67a082f9a82a453d936b793c14aae88"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.483520 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.486791 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc" podStartSLOduration=3.364081626 podStartE2EDuration="16.486781812s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.153013217 +0000 UTC m=+850.752801076" lastFinishedPulling="2026-01-20 04:03:37.275713393 +0000 UTC m=+863.875501262" observedRunningTime="2026-01-20 04:03:38.477165878 +0000 UTC m=+865.076953737" watchObservedRunningTime="2026-01-20 04:03:38.486781812 +0000 UTC 
m=+865.086569671" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.520626 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp" event={"ID":"0b29f776-9321-4967-bf9b-6fe35cf6c195","Type":"ContainerStarted","Data":"9b34fa0fd1ea8987a334ee6d371b36715767afe31f6d59a0e385f1a31b724353"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.521195 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.536006 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64" event={"ID":"c2673446-b027-4a75-b0d3-e823d7da9b4b","Type":"ContainerStarted","Data":"2e21ea589af45ebef1aaa079e5f39489d7b6bf7c36223130018abac1b8737c8d"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.536040 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.551927 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx" event={"ID":"3501119f-a33c-4069-bd8d-fe5fb5ef021b","Type":"ContainerStarted","Data":"f8f6cf7fb27c90a384778c52fad22cdcdb4dce39250af448300f2924d7fc511f"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.552609 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.569027 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b" event={"ID":"45d870c0-6af6-4cb0-9704-2ffafb2c423c","Type":"ContainerStarted","Data":"9cb0bdc9e85360a60e8cc86bd4f1fcf7b9023be49e877e698bdba8333a052f78"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.569888 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.582347 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf" event={"ID":"1aeab6f0-46f5-41ac-a3b7-4d428ab7c321","Type":"ContainerStarted","Data":"9c4fbbf490e6bc1887ad1e8743cb2ea1b04e7953f5cc0bf129b5540cf8d67749"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.583023 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.603087 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd" event={"ID":"1458adb7-c7bd-4a1d-8d55-1f9b9cd98e14","Type":"ContainerStarted","Data":"899223ac285cc1672337fdbbbc1bb0b1ec19eaa2704c5a3750bd905ead324c0f"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.604009 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.628767 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw" event={"ID":"d5cf00c9-700f-4f7b-98e1-626fdc638e32","Type":"ContainerStarted","Data":"87c0d7de587f733af810ed381ce990e2420aacca05b43fd3cf2ad9f325cd812e"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.630551 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.643838 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5" event={"ID":"ed13994e-012c-4775-9bac-c35117a1630b","Type":"ContainerStarted","Data":"bb5de382510b18321c88317036e465ef7665dd2bdb558a6a3972cc86472a2444"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.644729 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.663560 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6" podStartSLOduration=3.476933035 podStartE2EDuration="16.663533122s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.10914872 +0000 UTC m=+850.708936579" lastFinishedPulling="2026-01-20 04:03:37.295748807 +0000 UTC m=+863.895536666" observedRunningTime="2026-01-20 04:03:38.656927303 +0000 UTC m=+865.256715162" watchObservedRunningTime="2026-01-20 04:03:38.663533122 +0000 UTC m=+865.263320981" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.671324 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.685948 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:38 crc kubenswrapper[4898]: E0120 04:03:38.671489 4898 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 04:03:38 crc kubenswrapper[4898]: E0120 04:03:38.686641 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. No retries permitted until 2026-01-20 04:03:54.686454507 +0000 UTC m=+881.286242366 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "webhook-server-cert" not found Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.687164 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8" podStartSLOduration=7.658319485 podStartE2EDuration="16.687153668s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:23.417685344 +0000 UTC m=+850.017473203" lastFinishedPulling="2026-01-20 04:03:32.446519527 +0000 UTC m=+859.046307386" observedRunningTime="2026-01-20 04:03:38.579519335 +0000 UTC m=+865.179307194" watchObservedRunningTime="2026-01-20 04:03:38.687153668 +0000 UTC m=+865.286941527" Jan 20 04:03:38 crc kubenswrapper[4898]: E0120 04:03:38.688636 4898 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 04:03:38 crc kubenswrapper[4898]: E0120 04:03:38.688740 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs podName:6071d625-ea99-445e-a23c-31cf9e37b1f6 nodeName:}" failed. No retries permitted until 2026-01-20 04:03:54.688713668 +0000 UTC m=+881.288501517 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs") pod "openstack-operator-controller-manager-6cbf4594b6-vxpqs" (UID: "6071d625-ea99-445e-a23c-31cf9e37b1f6") : secret "metrics-server-cert" not found Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.674535 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl" event={"ID":"25439b6f-5a2a-4577-afd8-787d44877848","Type":"ContainerStarted","Data":"9c2dba2497917a4501944c304d156768291170229fc8bc3d4648e7490eff6d19"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.689225 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.701727 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8" podStartSLOduration=3.60675625 podStartE2EDuration="16.701709309s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.159780331 +0000 UTC m=+850.759568190" lastFinishedPulling="2026-01-20 04:03:37.25473339 +0000 UTC m=+863.854521249" observedRunningTime="2026-01-20 04:03:38.695287876 +0000 UTC m=+865.295075735" watchObservedRunningTime="2026-01-20 04:03:38.701709309 +0000 UTC m=+865.301497168" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.727687 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc" event={"ID":"e4f2f74e-46b2-4c21-8b82-13f450218389","Type":"ContainerStarted","Data":"74d4f1cef7cbb2d6bed8014662f724389be6bff4f2615668ed6b4092bde9edd9"} Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.729083 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.774914 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw" podStartSLOduration=3.75094449 podStartE2EDuration="16.774894343s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:23.607641921 +0000 UTC m=+850.207429780" lastFinishedPulling="2026-01-20 04:03:36.631591774 +0000 UTC m=+863.231379633" observedRunningTime="2026-01-20 04:03:38.756661997 +0000 UTC m=+865.356449856" watchObservedRunningTime="2026-01-20 04:03:38.774894343 +0000 UTC m=+865.374682202" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.807374 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp" podStartSLOduration=3.65573529 podStartE2EDuration="16.80734486s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.10218899 +0000 UTC m=+850.701976849" lastFinishedPulling="2026-01-20 04:03:37.25379856 +0000 UTC m=+863.853586419" observedRunningTime="2026-01-20 04:03:38.805839032 +0000 UTC m=+865.405626881" watchObservedRunningTime="2026-01-20 04:03:38.80734486 +0000 UTC m=+865.407132719" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.867020 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5" podStartSLOduration=3.712984749 podStartE2EDuration="16.866999156s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.101538129 +0000 UTC m=+850.701325988" lastFinishedPulling="2026-01-20 04:03:37.255552526 +0000 UTC m=+863.855340395" observedRunningTime="2026-01-20 04:03:38.851446424 +0000 UTC m=+865.451234283" watchObservedRunningTime="2026-01-20 04:03:38.866999156 +0000 UTC m=+865.466787015" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.940046 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk" podStartSLOduration=5.241015971 podStartE2EDuration="16.940024885s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:23.197633114 +0000 UTC m=+849.797420973" lastFinishedPulling="2026-01-20 04:03:34.896642018 +0000 UTC m=+861.496429887" observedRunningTime="2026-01-20 04:03:38.90505693 +0000 UTC m=+865.504844789" watchObservedRunningTime="2026-01-20 04:03:38.940024885 +0000 UTC m=+865.539812744" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.940252 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b" podStartSLOduration=3.923192636 podStartE2EDuration="16.940247602s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:23.61460866 +0000 UTC m=+850.214396519" lastFinishedPulling="2026-01-20 04:03:36.631663626 +0000 UTC m=+863.231451485" observedRunningTime="2026-01-20 04:03:38.93319941 +0000 UTC m=+865.532987259" watchObservedRunningTime="2026-01-20 04:03:38.940247602 +0000 UTC m=+865.540035461" Jan 20 04:03:38 crc kubenswrapper[4898]: I0120 04:03:38.978141 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd" podStartSLOduration=3.776745986 podStartE2EDuration="16.978124491s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:23.430305272 +0000 UTC m=+850.030093131" lastFinishedPulling="2026-01-20 04:03:36.631683777 +0000 UTC m=+863.231471636" observedRunningTime="2026-01-20 04:03:38.975834517 +0000 UTC m=+865.575622376" watchObservedRunningTime="2026-01-20 04:03:38.978124491 +0000 UTC m=+865.577912350" Jan 20 04:03:39 crc kubenswrapper[4898]: I0120 04:03:39.018946 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf" podStartSLOduration=3.816737131 podStartE2EDuration="17.018927961s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.126360765 +0000 UTC m=+850.726148624" lastFinishedPulling="2026-01-20 04:03:37.328551585 +0000 UTC m=+863.928339454" observedRunningTime="2026-01-20 04:03:39.016494154 +0000 UTC m=+865.616282013" watchObservedRunningTime="2026-01-20 04:03:39.018927961 +0000 UTC m=+865.618715820" Jan 20 04:03:39 crc kubenswrapper[4898]: I0120 04:03:39.076685 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx" podStartSLOduration=3.91032532 podStartE2EDuration="17.076670916s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.10788882 +0000 UTC m=+850.707676679" lastFinishedPulling="2026-01-20 04:03:37.274234416 +0000 UTC m=+863.874022275" observedRunningTime="2026-01-20 04:03:39.072749493 +0000 UTC m=+865.672537342" watchObservedRunningTime="2026-01-20 04:03:39.076670916 +0000 UTC m=+865.676458775" Jan 20 04:03:39 crc kubenswrapper[4898]: I0120 04:03:39.103268 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl" podStartSLOduration=3.007082356 podStartE2EDuration="17.103248907s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:23.157546117 +0000 UTC m=+849.757333976" lastFinishedPulling="2026-01-20 04:03:37.253712668 +0000 UTC m=+863.853500527" observedRunningTime="2026-01-20 04:03:39.051466649 +0000 UTC m=+865.651254508" watchObservedRunningTime="2026-01-20 04:03:39.103248907 +0000 UTC m=+865.703036756" Jan 20 04:03:39 crc kubenswrapper[4898]: I0120 04:03:39.111782 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64" podStartSLOduration=3.957216464 podStartE2EDuration="17.111765907s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.147789832 +0000 UTC m=+850.747577691" lastFinishedPulling="2026-01-20 04:03:37.302339265 +0000 UTC m=+863.902127134" observedRunningTime="2026-01-20 04:03:39.085583348 +0000 UTC m=+865.685371207" watchObservedRunningTime="2026-01-20 04:03:39.111765907 +0000 UTC m=+865.711553766" Jan 20 04:03:39 crc kubenswrapper[4898]: I0120 04:03:39.121307 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc" podStartSLOduration=3.874886417 podStartE2EDuration="17.121294607s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.102461538 +0000 UTC 
m=+850.702249397" lastFinishedPulling="2026-01-20 04:03:37.348869728 +0000 UTC m=+863.948657587" observedRunningTime="2026-01-20 04:03:39.119487831 +0000 UTC m=+865.719275690" watchObservedRunningTime="2026-01-20 04:03:39.121294607 +0000 UTC m=+865.721082466" Jan 20 04:03:39 crc kubenswrapper[4898]: I0120 04:03:39.156065 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk"] Jan 20 04:03:39 crc kubenswrapper[4898]: I0120 04:03:39.745133 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" event={"ID":"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb","Type":"ContainerStarted","Data":"581c0505298032b0afcb9cab78644888327a281d088d28ffebf6ed8781a0a854"} Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.453358 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-lvhpk" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.455963 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-tnrj8" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.460534 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-rcbnl" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.469703 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-mzmtp" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.470055 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5d87976b78-jpmbw" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.504941 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-7wx4b" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.505489 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhlsd" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.578757 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-2vrjf" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.663035 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mc2n6" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.687867 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-bzjfx" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.752145 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-wh8bc" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.761201 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-pvq64" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.809640 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-s42q8" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.838279 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-b4bkc" Jan 20 04:03:42 crc kubenswrapper[4898]: I0120 04:03:42.870967 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9wjw5" Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.825323 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" event={"ID":"56d05f5e-aa64-4ad6-94e0-aa14aa9317cb","Type":"ContainerStarted","Data":"e0343ee553a9d4e4fb0ba6a3046294165077134d97c6e73a47095a04a65c16c6"} Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.826202 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.827447 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" event={"ID":"0c3a8459-142e-4e4d-8546-220b3feec6ec","Type":"ContainerStarted","Data":"f70f03e3d53d37a2bf9d3bc27971ba6b71b3cfb97b799ef6aa2df62edf096a32"} Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.828090 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.829258 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" event={"ID":"e386e470-2db0-442a-8dd1-853ffe97e0f7","Type":"ContainerStarted","Data":"d853ae602235ff186c0ccaf143576f92908ee69a84d41ceeafd83fdb71f6ef2e"} Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.829499 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.834240 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" event={"ID":"dae6293c-0cae-4aff-a936-85ed72377a31","Type":"ContainerStarted","Data":"16e53dd3ae614c2c0ec07fc0e06d4e9f6734ddf5f1441b9134741acb1ecf8a95"} Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.834449 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.835643 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr" event={"ID":"0985373f-27d2-41cb-ba87-5d5845588c6b","Type":"ContainerStarted","Data":"986e623e3e62c1ff535ad268d5cce93a5a3a52426e813e3847e9d3783d957a4a"} Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.837063 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" event={"ID":"e40924a1-c172-4372-8adb-3919447c7207","Type":"ContainerStarted","Data":"694d35f1000faf173a721fc3c93c49d283ca49f846a9c2395815684f39034bd7"} Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.837319 4898 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.864160 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" podStartSLOduration=21.567661626 podStartE2EDuration="25.864145157s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:39.182761871 +0000 UTC m=+865.782549720" lastFinishedPulling="2026-01-20 04:03:43.479245372 +0000 UTC m=+870.079033251" observedRunningTime="2026-01-20 04:03:47.861556065 +0000 UTC m=+874.461343924" watchObservedRunningTime="2026-01-20 04:03:47.864145157 +0000 UTC m=+874.463933016" Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.892037 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" podStartSLOduration=3.008081998 podStartE2EDuration="25.892018349s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.164396347 +0000 UTC m=+850.764184206" lastFinishedPulling="2026-01-20 04:03:47.048332658 +0000 UTC m=+873.648120557" observedRunningTime="2026-01-20 04:03:47.886099992 +0000 UTC m=+874.485887861" watchObservedRunningTime="2026-01-20 04:03:47.892018349 +0000 UTC m=+874.491806208" Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.906935 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" podStartSLOduration=3.102057239 podStartE2EDuration="25.906913389s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.184218774 +0000 UTC m=+850.784006633" lastFinishedPulling="2026-01-20 04:03:46.989074924 +0000 UTC m=+873.588862783" observedRunningTime="2026-01-20 04:03:47.901349704 +0000 UTC m=+874.501137563" watchObservedRunningTime="2026-01-20 04:03:47.906913389 +0000 UTC m=+874.506701248" Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.924020 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjksr" podStartSLOduration=3.065323358 podStartE2EDuration="25.92399456s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.19040803 +0000 UTC m=+850.790195889" lastFinishedPulling="2026-01-20 04:03:47.049079232 +0000 UTC m=+873.648867091" observedRunningTime="2026-01-20 04:03:47.916941987 +0000 UTC m=+874.516729836" watchObservedRunningTime="2026-01-20 04:03:47.92399456 +0000 UTC m=+874.523782419" Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.937311 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" podStartSLOduration=5.979465082 podStartE2EDuration="25.93730275s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.169499548 +0000 UTC m=+850.769287407" lastFinishedPulling="2026-01-20 04:03:44.127337186 +0000 UTC m=+870.727125075" observedRunningTime="2026-01-20 04:03:47.935746712 +0000 UTC m=+874.535534571" watchObservedRunningTime="2026-01-20 04:03:47.93730275 +0000 UTC m=+874.537090609" Jan 20 04:03:47 crc kubenswrapper[4898]: I0120 04:03:47.966471 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" podStartSLOduration=6.670514527 podStartE2EDuration="25.966434652s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:24.185313709 +0000 UTC m=+850.785101568" lastFinishedPulling="2026-01-20 04:03:43.481233824 +0000 UTC m=+870.081021693" observedRunningTime="2026-01-20 04:03:47.959676288 +0000 UTC m=+874.559464147" watchObservedRunningTime="2026-01-20 04:03:47.966434652 +0000 UTC m=+874.566222511" Jan 20 04:03:52 crc kubenswrapper[4898]: I0120 04:03:52.761991 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-5bnlz" Jan 20 04:03:52 crc kubenswrapper[4898]: I0120 04:03:52.994873 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-jfvrm" Jan 20 04:03:53 crc kubenswrapper[4898]: I0120 04:03:53.021131 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-74nht" Jan 20 04:03:53 crc kubenswrapper[4898]: I0120 04:03:53.040212 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-56hrd" Jan 20 04:03:54 crc kubenswrapper[4898]: I0120 04:03:54.021286 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:54 crc kubenswrapper[4898]: I0120 04:03:54.048060 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c88cac18-f08f-4ad2-8bf2-21d27972223a-cert\") pod \"infra-operator-controller-manager-77c48c7859-pzzk6\" (UID: \"c88cac18-f08f-4ad2-8bf2-21d27972223a\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:54 crc kubenswrapper[4898]: I0120 04:03:54.297756 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:03:54 crc kubenswrapper[4898]: I0120 04:03:54.529623 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6"] Jan 20 04:03:54 crc kubenswrapper[4898]: W0120 04:03:54.539169 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc88cac18_f08f_4ad2_8bf2_21d27972223a.slice/crio-d0c71cca18da031f00cad6492a830b1b7173d45c07dd0dcb214d9e07d8e73eae WatchSource:0}: Error finding container d0c71cca18da031f00cad6492a830b1b7173d45c07dd0dcb214d9e07d8e73eae: Status 404 returned error can't find the container with id d0c71cca18da031f00cad6492a830b1b7173d45c07dd0dcb214d9e07d8e73eae Jan 20 04:03:54 crc kubenswrapper[4898]: I0120 04:03:54.731035 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:54 crc kubenswrapper[4898]: I0120 04:03:54.731835 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:54 crc kubenswrapper[4898]: I0120 04:03:54.737652 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-metrics-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:54 crc kubenswrapper[4898]: I0120 04:03:54.737672 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6071d625-ea99-445e-a23c-31cf9e37b1f6-webhook-certs\") pod \"openstack-operator-controller-manager-6cbf4594b6-vxpqs\" (UID: \"6071d625-ea99-445e-a23c-31cf9e37b1f6\") " pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:54 crc kubenswrapper[4898]: I0120 04:03:54.895427 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" event={"ID":"c88cac18-f08f-4ad2-8bf2-21d27972223a","Type":"ContainerStarted","Data":"d0c71cca18da031f00cad6492a830b1b7173d45c07dd0dcb214d9e07d8e73eae"} Jan 20 04:03:54 crc kubenswrapper[4898]: I0120 04:03:54.914983 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:03:55 crc kubenswrapper[4898]: I0120 04:03:55.421643 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs"] Jan 20 04:04:03 crc kubenswrapper[4898]: I0120 04:03:55.903442 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" event={"ID":"6071d625-ea99-445e-a23c-31cf9e37b1f6","Type":"ContainerStarted","Data":"4904c723a78260b8d030ecce20a98bb0ed9586461ec8577ab76a63af71048995"} Jan 20 04:04:03 crc kubenswrapper[4898]: I0120 04:03:58.405263 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk" Jan 20 04:04:03 crc kubenswrapper[4898]: I0120 04:04:02.956951 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" event={"ID":"6071d625-ea99-445e-a23c-31cf9e37b1f6","Type":"ContainerStarted","Data":"112bae2762f6b8096b73eb95c09c4b61c65cda82b235a13dc94fc2d056f8fe91"} Jan 20 04:04:03 crc kubenswrapper[4898]: I0120 04:04:02.957515 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:04:03 crc kubenswrapper[4898]: I0120 04:04:02.987238 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" podStartSLOduration=40.987215843 podStartE2EDuration="40.987215843s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:04:02.98081748 +0000 UTC m=+889.580605339" watchObservedRunningTime="2026-01-20 04:04:02.987215843 +0000 UTC m=+889.587003742" Jan 20 04:04:04 crc kubenswrapper[4898]: I0120 04:04:04.986162 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" event={"ID":"c88cac18-f08f-4ad2-8bf2-21d27972223a","Type":"ContainerStarted","Data":"3e26ae6f454f5caf7a4a034e8004dc1c5547ba45fc8cde24cca4de4f08cec6e2"} Jan 20 04:04:04 crc kubenswrapper[4898]: I0120 04:04:04.986985 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:04:14 crc kubenswrapper[4898]: I0120 04:04:14.305495 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" Jan 20 04:04:14 crc kubenswrapper[4898]: I0120 04:04:14.321575 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-pzzk6" podStartSLOduration=42.308958741 podStartE2EDuration="52.321558525s" podCreationTimestamp="2026-01-20 04:03:22 +0000 UTC" firstStartedPulling="2026-01-20 04:03:54.543945306 +0000 UTC m=+881.143733165" lastFinishedPulling="2026-01-20 04:04:04.55654509 +0000 UTC m=+891.156332949" observedRunningTime="2026-01-20 04:04:05.008629897 +0000 UTC m=+891.608417756" watchObservedRunningTime="2026-01-20 04:04:14.321558525 +0000 UTC m=+900.921346384" Jan 20 04:04:14 crc kubenswrapper[4898]: I0120 04:04:14.926505 4898 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6cbf4594b6-vxpqs" Jan 20 04:04:30 crc kubenswrapper[4898]: I0120 04:04:30.978642 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lh54w"] Jan 20 04:04:30 crc kubenswrapper[4898]: I0120 04:04:30.983413 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" Jan 20 04:04:30 crc kubenswrapper[4898]: I0120 04:04:30.985066 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 20 04:04:30 crc kubenswrapper[4898]: I0120 04:04:30.986592 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 20 04:04:30 crc kubenswrapper[4898]: I0120 04:04:30.986913 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-npz58" Jan 20 04:04:30 crc kubenswrapper[4898]: I0120 04:04:30.987155 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.005302 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lh54w"] Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.069682 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xdcqs"] Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.071533 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.073996 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.079217 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xdcqs"] Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.175077 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c04e8aa-7289-4055-8c32-1fcd6c04f510-config\") pod \"dnsmasq-dns-675f4bcbfc-lh54w\" (UID: \"7c04e8aa-7289-4055-8c32-1fcd6c04f510\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.175169 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8xv\" (UniqueName: \"kubernetes.io/projected/7c04e8aa-7289-4055-8c32-1fcd6c04f510-kube-api-access-5j8xv\") pod \"dnsmasq-dns-675f4bcbfc-lh54w\" (UID: \"7c04e8aa-7289-4055-8c32-1fcd6c04f510\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.175199 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-config\") pod \"dnsmasq-dns-78dd6ddcc-xdcqs\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.175238 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xdcqs\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.175269 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtqgm\" (UniqueName: \"kubernetes.io/projected/c831765e-262d-4068-9411-8bbf3bbd0190-kube-api-access-rtqgm\") pod \"dnsmasq-dns-78dd6ddcc-xdcqs\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.277502 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j8xv\" (UniqueName: \"kubernetes.io/projected/7c04e8aa-7289-4055-8c32-1fcd6c04f510-kube-api-access-5j8xv\") pod \"dnsmasq-dns-675f4bcbfc-lh54w\" (UID: \"7c04e8aa-7289-4055-8c32-1fcd6c04f510\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.277561 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-config\") pod \"dnsmasq-dns-78dd6ddcc-xdcqs\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.277605 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xdcqs\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.277634 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtqgm\" (UniqueName: \"kubernetes.io/projected/c831765e-262d-4068-9411-8bbf3bbd0190-kube-api-access-rtqgm\") pod \"dnsmasq-dns-78dd6ddcc-xdcqs\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.277682 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c04e8aa-7289-4055-8c32-1fcd6c04f510-config\") pod \"dnsmasq-dns-675f4bcbfc-lh54w\" (UID: \"7c04e8aa-7289-4055-8c32-1fcd6c04f510\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.278920 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c04e8aa-7289-4055-8c32-1fcd6c04f510-config\") pod \"dnsmasq-dns-675f4bcbfc-lh54w\" (UID: \"7c04e8aa-7289-4055-8c32-1fcd6c04f510\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.280216 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-config\") pod \"dnsmasq-dns-78dd6ddcc-xdcqs\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.280758 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xdcqs\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.302929 4898 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j8xv\" (UniqueName: \"kubernetes.io/projected/7c04e8aa-7289-4055-8c32-1fcd6c04f510-kube-api-access-5j8xv\") pod \"dnsmasq-dns-675f4bcbfc-lh54w\" (UID: \"7c04e8aa-7289-4055-8c32-1fcd6c04f510\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.305454 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtqgm\" (UniqueName: \"kubernetes.io/projected/c831765e-262d-4068-9411-8bbf3bbd0190-kube-api-access-rtqgm\") pod \"dnsmasq-dns-78dd6ddcc-xdcqs\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.308860 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.404420 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.766271 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lh54w"] Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.772868 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 04:04:31 crc kubenswrapper[4898]: I0120 04:04:31.824037 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xdcqs"] Jan 20 04:04:31 crc kubenswrapper[4898]: W0120 04:04:31.828637 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc831765e_262d_4068_9411_8bbf3bbd0190.slice/crio-07dd99b3d5d29575fb83073e8c8a9e8a03c493b338e4b8647db42ae4fcd8f28b WatchSource:0}: Error finding container 07dd99b3d5d29575fb83073e8c8a9e8a03c493b338e4b8647db42ae4fcd8f28b: Status 404 returned error can't find the container with id 07dd99b3d5d29575fb83073e8c8a9e8a03c493b338e4b8647db42ae4fcd8f28b Jan 20 04:04:32 crc kubenswrapper[4898]: I0120 04:04:32.224219 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" event={"ID":"7c04e8aa-7289-4055-8c32-1fcd6c04f510","Type":"ContainerStarted","Data":"0399470f1035d2c21cbd91f6909abd2e102c917d3742a5884ecf0f8611ab8ab7"} Jan 20 04:04:32 crc kubenswrapper[4898]: I0120 04:04:32.225159 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" event={"ID":"c831765e-262d-4068-9411-8bbf3bbd0190","Type":"ContainerStarted","Data":"07dd99b3d5d29575fb83073e8c8a9e8a03c493b338e4b8647db42ae4fcd8f28b"} Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.711418 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lh54w"] Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.740667 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fxbxb"] Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.741888 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.751636 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fxbxb"] Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.828402 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-config\") pod \"dnsmasq-dns-666b6646f7-fxbxb\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.828743 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r5px\" (UniqueName: \"kubernetes.io/projected/191ea2a9-3d41-4a1a-a805-f797900d51c1-kube-api-access-5r5px\") pod \"dnsmasq-dns-666b6646f7-fxbxb\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.828784 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fxbxb\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.931237 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-config\") pod \"dnsmasq-dns-666b6646f7-fxbxb\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.931300 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r5px\" (UniqueName: \"kubernetes.io/projected/191ea2a9-3d41-4a1a-a805-f797900d51c1-kube-api-access-5r5px\") pod \"dnsmasq-dns-666b6646f7-fxbxb\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.931325 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fxbxb\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.932191 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fxbxb\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.932707 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-config\") pod \"dnsmasq-dns-666b6646f7-fxbxb\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.956040 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r5px\" (UniqueName: 
\"kubernetes.io/projected/191ea2a9-3d41-4a1a-a805-f797900d51c1-kube-api-access-5r5px\") pod \"dnsmasq-dns-666b6646f7-fxbxb\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:33 crc kubenswrapper[4898]: I0120 04:04:33.980082 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xdcqs"] Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.000998 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n7gk8"] Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.003753 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.021003 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n7gk8"] Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.035542 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzcv7\" (UniqueName: \"kubernetes.io/projected/448402be-0300-4347-915e-f3a209c414e4-kube-api-access-nzcv7\") pod \"dnsmasq-dns-57d769cc4f-n7gk8\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.035614 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-config\") pod \"dnsmasq-dns-57d769cc4f-n7gk8\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.035779 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-n7gk8\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.067913 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.142297 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzcv7\" (UniqueName: \"kubernetes.io/projected/448402be-0300-4347-915e-f3a209c414e4-kube-api-access-nzcv7\") pod \"dnsmasq-dns-57d769cc4f-n7gk8\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.142393 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-config\") pod \"dnsmasq-dns-57d769cc4f-n7gk8\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.142555 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-n7gk8\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.144538 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-config\") pod \"dnsmasq-dns-57d769cc4f-n7gk8\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.145548 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-n7gk8\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.165641 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzcv7\" (UniqueName: \"kubernetes.io/projected/448402be-0300-4347-915e-f3a209c414e4-kube-api-access-nzcv7\") pod \"dnsmasq-dns-57d769cc4f-n7gk8\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.333314 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.578011 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fxbxb"] Jan 20 04:04:34 crc kubenswrapper[4898]: W0120 04:04:34.591699 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod191ea2a9_3d41_4a1a_a805_f797900d51c1.slice/crio-46130267ec1459107e7d5127b3af67ac1674f1b8e571a29323713f372821529c WatchSource:0}: Error finding container 46130267ec1459107e7d5127b3af67ac1674f1b8e571a29323713f372821529c: Status 404 returned error can't find the container with id 46130267ec1459107e7d5127b3af67ac1674f1b8e571a29323713f372821529c Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.863574 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.865463 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.875978 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.876529 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.876644 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.876930 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.877059 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l2nfw" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.877063 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.877186 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.886073 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.899495 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n7gk8"] Jan 20 04:04:34 crc kubenswrapper[4898]: W0120 04:04:34.909407 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod448402be_0300_4347_915e_f3a209c414e4.slice/crio-8df9d75fdc1a379dbd9708fff0782cc1a7888ab0a3187b50026989cf51c6bd23 WatchSource:0}: Error finding container 8df9d75fdc1a379dbd9708fff0782cc1a7888ab0a3187b50026989cf51c6bd23: Status 404 returned error can't find the container with id 8df9d75fdc1a379dbd9708fff0782cc1a7888ab0a3187b50026989cf51c6bd23 Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.960415 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.960485 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.960527 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.960561 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.960600 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6sl6\" (UniqueName: \"kubernetes.io/projected/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-kube-api-access-d6sl6\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.960623 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.960643 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.960664 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.960683 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.960721 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-config-data\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:34 crc kubenswrapper[4898]: I0120 04:04:34.960749 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.062284 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-config-data\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.062332 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 
04:04:35.062415 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.062459 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.062483 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.062519 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.062538 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6sl6\" (UniqueName: \"kubernetes.io/projected/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-kube-api-access-d6sl6\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.062558 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.062578 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.062601 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.062620 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.063409 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: 
\"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.063861 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.063959 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.064032 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.064288 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.068717 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.073122 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.078615 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.079142 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-config-data\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.083684 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6sl6\" (UniqueName: \"kubernetes.io/projected/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-kube-api-access-d6sl6\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.085707 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.107928 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48\") " pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.130504 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.136139 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.148298 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.149465 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.150009 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.150933 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.151124 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.151222 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.151308 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r7l9j" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.152083 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.163174 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2xb\" (UniqueName: \"kubernetes.io/projected/a1f422f4-afd1-4794-85b1-cb82712e004a-kube-api-access-nn2xb\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.163224 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.163250 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.163455 4898 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1f422f4-afd1-4794-85b1-cb82712e004a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.163561 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.163793 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.163874 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1f422f4-afd1-4794-85b1-cb82712e004a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.163919 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1f422f4-afd1-4794-85b1-cb82712e004a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.163963 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1f422f4-afd1-4794-85b1-cb82712e004a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.164024 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1f422f4-afd1-4794-85b1-cb82712e004a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.164056 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.201768 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267238 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267289 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267347 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1f422f4-afd1-4794-85b1-cb82712e004a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267383 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267475 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267513 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1f422f4-afd1-4794-85b1-cb82712e004a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267537 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1f422f4-afd1-4794-85b1-cb82712e004a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267531 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267567 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1f422f4-afd1-4794-85b1-cb82712e004a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267608 4898 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1f422f4-afd1-4794-85b1-cb82712e004a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267623 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.267660 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn2xb\" (UniqueName: \"kubernetes.io/projected/a1f422f4-afd1-4794-85b1-cb82712e004a-kube-api-access-nn2xb\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.269363 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1f422f4-afd1-4794-85b1-cb82712e004a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.269900 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1f422f4-afd1-4794-85b1-cb82712e004a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.271349 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1f422f4-afd1-4794-85b1-cb82712e004a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.273197 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.273687 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.275059 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1f422f4-afd1-4794-85b1-cb82712e004a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.284863 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" 
event={"ID":"448402be-0300-4347-915e-f3a209c414e4","Type":"ContainerStarted","Data":"8df9d75fdc1a379dbd9708fff0782cc1a7888ab0a3187b50026989cf51c6bd23"} Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.287040 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" event={"ID":"191ea2a9-3d41-4a1a-a805-f797900d51c1","Type":"ContainerStarted","Data":"46130267ec1459107e7d5127b3af67ac1674f1b8e571a29323713f372821529c"} Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.292140 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn2xb\" (UniqueName: \"kubernetes.io/projected/a1f422f4-afd1-4794-85b1-cb82712e004a-kube-api-access-nn2xb\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.297385 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1f422f4-afd1-4794-85b1-cb82712e004a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.298120 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.301876 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1f422f4-afd1-4794-85b1-cb82712e004a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.308871 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1f422f4-afd1-4794-85b1-cb82712e004a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:35 crc kubenswrapper[4898]: I0120 04:04:35.485887 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.375577 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.377218 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.380628 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.380652 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hndj7" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.381268 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.383043 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.384225 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.391735 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.509957 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.510018 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.510060 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.510081 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g7jp\" (UniqueName: \"kubernetes.io/projected/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-kube-api-access-2g7jp\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.510944 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.511073 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.511210 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-kolla-config\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.511245 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-config-data-default\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.612793 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.612845 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.612883 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.612900 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g7jp\" (UniqueName: \"kubernetes.io/projected/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-kube-api-access-2g7jp\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.612951 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.612969 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.613006 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-kolla-config\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.613021 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.614022 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.635379 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-config-data-default\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.636268 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.636299 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.641945 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.642721 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-kolla-config\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.648578 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.666197 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.668815 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g7jp\" (UniqueName: \"kubernetes.io/projected/ec4f8f5c-5a5e-4c01-a81a-567a6e62176d-kube-api-access-2g7jp\") pod \"openstack-galera-0\" (UID: \"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d\") " pod="openstack/openstack-galera-0" Jan 20 04:04:36 crc kubenswrapper[4898]: I0120 04:04:36.709938 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 20 04:04:37 crc kubenswrapper[4898]: I0120 04:04:37.872831 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 20 04:04:37 crc kubenswrapper[4898]: I0120 04:04:37.875075 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:37 crc kubenswrapper[4898]: I0120 04:04:37.878878 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 20 04:04:37 crc kubenswrapper[4898]: I0120 04:04:37.881527 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-2kc6g" Jan 20 04:04:37 crc kubenswrapper[4898]: I0120 04:04:37.881795 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 20 04:04:37 crc kubenswrapper[4898]: I0120 04:04:37.882067 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 20 04:04:37 crc kubenswrapper[4898]: I0120 04:04:37.894612 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.042117 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f726d262-f94d-4ff3-a4ae-a51076898b72-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.042515 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f726d262-f94d-4ff3-a4ae-a51076898b72-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.042555 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.042588 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f726d262-f94d-4ff3-a4ae-a51076898b72-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.042612 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zp6h\" (UniqueName: \"kubernetes.io/projected/f726d262-f94d-4ff3-a4ae-a51076898b72-kube-api-access-5zp6h\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.042641 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f726d262-f94d-4ff3-a4ae-a51076898b72-config-data-generated\") 
pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.042656 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f726d262-f94d-4ff3-a4ae-a51076898b72-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.043052 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f726d262-f94d-4ff3-a4ae-a51076898b72-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.145128 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f726d262-f94d-4ff3-a4ae-a51076898b72-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.145170 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f726d262-f94d-4ff3-a4ae-a51076898b72-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.145246 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f726d262-f94d-4ff3-a4ae-a51076898b72-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.145294 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f726d262-f94d-4ff3-a4ae-a51076898b72-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.145313 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f726d262-f94d-4ff3-a4ae-a51076898b72-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.145333 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.145352 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f726d262-f94d-4ff3-a4ae-a51076898b72-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: 
\"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.145373 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zp6h\" (UniqueName: \"kubernetes.io/projected/f726d262-f94d-4ff3-a4ae-a51076898b72-kube-api-access-5zp6h\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.146061 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.146160 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f726d262-f94d-4ff3-a4ae-a51076898b72-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.147332 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f726d262-f94d-4ff3-a4ae-a51076898b72-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.147475 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f726d262-f94d-4ff3-a4ae-a51076898b72-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.148783 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f726d262-f94d-4ff3-a4ae-a51076898b72-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.151893 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f726d262-f94d-4ff3-a4ae-a51076898b72-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.155046 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f726d262-f94d-4ff3-a4ae-a51076898b72-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.169766 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zp6h\" (UniqueName: \"kubernetes.io/projected/f726d262-f94d-4ff3-a4ae-a51076898b72-kube-api-access-5zp6h\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc 
kubenswrapper[4898]: I0120 04:04:38.171304 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f726d262-f94d-4ff3-a4ae-a51076898b72\") " pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.218646 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.255033 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.255936 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.264621 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.264970 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qwmls" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.268302 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.289058 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.348129 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f47b2f9-88d3-43e4-9f9c-da4340a63519-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.348172 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bncmz\" (UniqueName: \"kubernetes.io/projected/7f47b2f9-88d3-43e4-9f9c-da4340a63519-kube-api-access-bncmz\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.348209 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f47b2f9-88d3-43e4-9f9c-da4340a63519-config-data\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.348245 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7f47b2f9-88d3-43e4-9f9c-da4340a63519-kolla-config\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.348301 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f47b2f9-88d3-43e4-9f9c-da4340a63519-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.449645 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7f47b2f9-88d3-43e4-9f9c-da4340a63519-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.449699 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f47b2f9-88d3-43e4-9f9c-da4340a63519-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.449729 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bncmz\" (UniqueName: \"kubernetes.io/projected/7f47b2f9-88d3-43e4-9f9c-da4340a63519-kube-api-access-bncmz\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.449763 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f47b2f9-88d3-43e4-9f9c-da4340a63519-config-data\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.449794 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7f47b2f9-88d3-43e4-9f9c-da4340a63519-kolla-config\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.450658 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7f47b2f9-88d3-43e4-9f9c-da4340a63519-kolla-config\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.450877 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f47b2f9-88d3-43e4-9f9c-da4340a63519-config-data\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.453464 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f47b2f9-88d3-43e4-9f9c-da4340a63519-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.458896 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f47b2f9-88d3-43e4-9f9c-da4340a63519-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.491931 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bncmz\" (UniqueName: \"kubernetes.io/projected/7f47b2f9-88d3-43e4-9f9c-da4340a63519-kube-api-access-bncmz\") pod \"memcached-0\" (UID: \"7f47b2f9-88d3-43e4-9f9c-da4340a63519\") " pod="openstack/memcached-0" Jan 20 04:04:38 crc kubenswrapper[4898]: I0120 04:04:38.572749 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 20 04:04:40 crc kubenswrapper[4898]: I0120 04:04:40.632936 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 04:04:40 crc kubenswrapper[4898]: I0120 04:04:40.635566 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 04:04:40 crc kubenswrapper[4898]: I0120 04:04:40.649272 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 04:04:40 crc kubenswrapper[4898]: I0120 04:04:40.650482 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-72wbw" Jan 20 04:04:40 crc kubenswrapper[4898]: I0120 04:04:40.788967 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg8ft\" (UniqueName: \"kubernetes.io/projected/d881812d-76d9-4618-8e72-815f0d9571f5-kube-api-access-jg8ft\") pod \"kube-state-metrics-0\" (UID: \"d881812d-76d9-4618-8e72-815f0d9571f5\") " pod="openstack/kube-state-metrics-0" Jan 20 04:04:40 crc kubenswrapper[4898]: I0120 04:04:40.890169 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg8ft\" (UniqueName: \"kubernetes.io/projected/d881812d-76d9-4618-8e72-815f0d9571f5-kube-api-access-jg8ft\") pod \"kube-state-metrics-0\" (UID: \"d881812d-76d9-4618-8e72-815f0d9571f5\") " pod="openstack/kube-state-metrics-0" Jan 20 04:04:40 crc kubenswrapper[4898]: I0120 04:04:40.911938 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg8ft\" (UniqueName: \"kubernetes.io/projected/d881812d-76d9-4618-8e72-815f0d9571f5-kube-api-access-jg8ft\") pod \"kube-state-metrics-0\" (UID: \"d881812d-76d9-4618-8e72-815f0d9571f5\") " pod="openstack/kube-state-metrics-0" Jan 20 04:04:40 crc kubenswrapper[4898]: I0120 04:04:40.953981 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.276418 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.278680 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.279386 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.285636 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.285819 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.287005 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.287399 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.287474 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-hncnw" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.358532 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.359127 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a20df64-f80d-4506-bcf4-2cdcc1eee607-config\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.359211 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjwl8\" (UniqueName: \"kubernetes.io/projected/8a20df64-f80d-4506-bcf4-2cdcc1eee607-kube-api-access-jjwl8\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.359278 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a20df64-f80d-4506-bcf4-2cdcc1eee607-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.367206 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a20df64-f80d-4506-bcf4-2cdcc1eee607-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.367260 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a20df64-f80d-4506-bcf4-2cdcc1eee607-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.367305 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/8a20df64-f80d-4506-bcf4-2cdcc1eee607-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.367365 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a20df64-f80d-4506-bcf4-2cdcc1eee607-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.468618 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjwl8\" (UniqueName: \"kubernetes.io/projected/8a20df64-f80d-4506-bcf4-2cdcc1eee607-kube-api-access-jjwl8\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.468721 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a20df64-f80d-4506-bcf4-2cdcc1eee607-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.468767 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a20df64-f80d-4506-bcf4-2cdcc1eee607-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.468789 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a20df64-f80d-4506-bcf4-2cdcc1eee607-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.468816 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a20df64-f80d-4506-bcf4-2cdcc1eee607-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.468849 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a20df64-f80d-4506-bcf4-2cdcc1eee607-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.468903 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.468926 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a20df64-f80d-4506-bcf4-2cdcc1eee607-config\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 
04:04:44.469379 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.469914 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a20df64-f80d-4506-bcf4-2cdcc1eee607-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.470641 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a20df64-f80d-4506-bcf4-2cdcc1eee607-config\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.471632 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a20df64-f80d-4506-bcf4-2cdcc1eee607-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.478484 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a20df64-f80d-4506-bcf4-2cdcc1eee607-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.478813 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a20df64-f80d-4506-bcf4-2cdcc1eee607-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.485406 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a20df64-f80d-4506-bcf4-2cdcc1eee607-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.496162 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjwl8\" (UniqueName: \"kubernetes.io/projected/8a20df64-f80d-4506-bcf4-2cdcc1eee607-kube-api-access-jjwl8\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.499120 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8a20df64-f80d-4506-bcf4-2cdcc1eee607\") " pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:44 crc kubenswrapper[4898]: I0120 04:04:44.624721 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 20 04:04:45 crc kubenswrapper[4898]: I0120 04:04:45.441163 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 20 04:04:45 crc kubenswrapper[4898]: I0120 04:04:45.902966 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ln6nh"] Jan 20 04:04:45 crc kubenswrapper[4898]: I0120 04:04:45.905057 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:45 crc kubenswrapper[4898]: I0120 04:04:45.908874 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 20 04:04:45 crc kubenswrapper[4898]: I0120 04:04:45.909066 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qxfh2" Jan 20 04:04:45 crc kubenswrapper[4898]: I0120 04:04:45.909173 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 20 04:04:45 crc kubenswrapper[4898]: I0120 04:04:45.917054 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9mdd5"] Jan 20 04:04:45 crc kubenswrapper[4898]: I0120 04:04:45.919460 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:45 crc kubenswrapper[4898]: I0120 04:04:45.927187 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ln6nh"] Jan 20 04:04:45 crc kubenswrapper[4898]: I0120 04:04:45.937979 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9mdd5"] Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:45.999420 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a6903c19-3320-443c-8713-105a39a65527-var-run-ovn\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:45.999561 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6903c19-3320-443c-8713-105a39a65527-ovn-controller-tls-certs\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:45.999800 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6903c19-3320-443c-8713-105a39a65527-scripts\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:45.999888 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-var-run\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:45.999916 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6903c19-3320-443c-8713-105a39a65527-combined-ca-bundle\") 
pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:45.999932 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlzgm\" (UniqueName: \"kubernetes.io/projected/a6903c19-3320-443c-8713-105a39a65527-kube-api-access-jlzgm\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.000129 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-scripts\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.000186 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a6903c19-3320-443c-8713-105a39a65527-var-log-ovn\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.000276 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a6903c19-3320-443c-8713-105a39a65527-var-run\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.000313 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-var-lib\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.000602 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bn29\" (UniqueName: \"kubernetes.io/projected/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-kube-api-access-5bn29\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.000670 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-var-log\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.000713 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-etc-ovs\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.102868 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-scripts\") pod \"ovn-controller-ovs-9mdd5\" (UID: 
\"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.102937 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a6903c19-3320-443c-8713-105a39a65527-var-log-ovn\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.102975 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a6903c19-3320-443c-8713-105a39a65527-var-run\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.102995 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-var-lib\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.103052 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bn29\" (UniqueName: \"kubernetes.io/projected/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-kube-api-access-5bn29\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.103076 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-var-log\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.103100 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-etc-ovs\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.103132 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a6903c19-3320-443c-8713-105a39a65527-var-run-ovn\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.103155 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6903c19-3320-443c-8713-105a39a65527-ovn-controller-tls-certs\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.103207 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6903c19-3320-443c-8713-105a39a65527-scripts\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.103235 4898 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-var-run\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.103257 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6903c19-3320-443c-8713-105a39a65527-combined-ca-bundle\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.103279 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlzgm\" (UniqueName: \"kubernetes.io/projected/a6903c19-3320-443c-8713-105a39a65527-kube-api-access-jlzgm\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.104045 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a6903c19-3320-443c-8713-105a39a65527-var-run\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.104082 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-var-run\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.104068 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-var-log\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.104067 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a6903c19-3320-443c-8713-105a39a65527-var-log-ovn\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.104165 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-etc-ovs\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.104264 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a6903c19-3320-443c-8713-105a39a65527-var-run-ovn\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.105780 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6903c19-3320-443c-8713-105a39a65527-scripts\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 
04:04:46.106863 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-scripts\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.107066 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-var-lib\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.110822 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6903c19-3320-443c-8713-105a39a65527-combined-ca-bundle\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.121314 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlzgm\" (UniqueName: \"kubernetes.io/projected/a6903c19-3320-443c-8713-105a39a65527-kube-api-access-jlzgm\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.127016 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bn29\" (UniqueName: \"kubernetes.io/projected/ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef-kube-api-access-5bn29\") pod \"ovn-controller-ovs-9mdd5\" (UID: \"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef\") " pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.127942 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6903c19-3320-443c-8713-105a39a65527-ovn-controller-tls-certs\") pod \"ovn-controller-ln6nh\" (UID: \"a6903c19-3320-443c-8713-105a39a65527\") " pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.231475 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ln6nh" Jan 20 04:04:46 crc kubenswrapper[4898]: I0120 04:04:46.248489 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.775263 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.779776 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.784491 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.784961 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-5d2cq" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.786022 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.788342 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.788551 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.838242 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.838362 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.838390 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-config\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.838408 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j44jb\" (UniqueName: \"kubernetes.io/projected/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-kube-api-access-j44jb\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.838560 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.838600 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.838641 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.838676 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.940716 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.940764 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.940795 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.940823 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.940866 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.940919 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.940947 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-config\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.940966 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j44jb\" (UniqueName: \"kubernetes.io/projected/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-kube-api-access-j44jb\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.941542 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.943067 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.943315 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-config\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.944618 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.948952 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.949043 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.955045 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.955400 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j44jb\" (UniqueName: \"kubernetes.io/projected/82d86080-ab0b-4b48-9847-ead3c4bcc6c4-kube-api-access-j44jb\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:47 crc kubenswrapper[4898]: I0120 04:04:47.966700 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"82d86080-ab0b-4b48-9847-ead3c4bcc6c4\") " pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:48 crc kubenswrapper[4898]: I0120 04:04:48.119380 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 20 04:04:50 crc kubenswrapper[4898]: I0120 04:04:50.422700 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7f47b2f9-88d3-43e4-9f9c-da4340a63519","Type":"ContainerStarted","Data":"b4e04eca7cfff89b4a72e1d4e186ae173c827c6f01df04d3bf989b715526e689"} Jan 20 04:04:50 crc kubenswrapper[4898]: I0120 04:04:50.854197 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 20 04:04:50 crc kubenswrapper[4898]: I0120 04:04:50.912992 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 04:04:51 crc kubenswrapper[4898]: W0120 04:04:51.322069 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec4f8f5c_5a5e_4c01_a81a_567a6e62176d.slice/crio-ef3c82c66c734d383d2b81d7638e7b9d4393574cfe50f2b639a40343dc3d471e WatchSource:0}: Error finding container ef3c82c66c734d383d2b81d7638e7b9d4393574cfe50f2b639a40343dc3d471e: Status 404 returned error can't find the container with id ef3c82c66c734d383d2b81d7638e7b9d4393574cfe50f2b639a40343dc3d471e Jan 20 04:04:51 crc kubenswrapper[4898]: E0120 04:04:51.348622 4898 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 04:04:51 crc kubenswrapper[4898]: E0120 04:04:51.348809 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtqgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-xdcqs_openstack(c831765e-262d-4068-9411-8bbf3bbd0190): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 04:04:51 crc kubenswrapper[4898]: E0120 04:04:51.350416 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" podUID="c831765e-262d-4068-9411-8bbf3bbd0190" Jan 20 04:04:51 crc kubenswrapper[4898]: E0120 04:04:51.380065 4898 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 04:04:51 crc kubenswrapper[4898]: E0120 04:04:51.380235 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j8xv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-lh54w_openstack(7c04e8aa-7289-4055-8c32-1fcd6c04f510): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 04:04:51 crc kubenswrapper[4898]: E0120 04:04:51.381739 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" podUID="7c04e8aa-7289-4055-8c32-1fcd6c04f510" Jan 20 04:04:51 crc kubenswrapper[4898]: I0120 04:04:51.438702 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d","Type":"ContainerStarted","Data":"ef3c82c66c734d383d2b81d7638e7b9d4393574cfe50f2b639a40343dc3d471e"} Jan 20 04:04:51 crc kubenswrapper[4898]: I0120 04:04:51.464600 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48","Type":"ContainerStarted","Data":"2fd45829a8fdc6930f8b7d53a83451f22cbf7d6ba259028fcaab55e9c2d397f0"} Jan 20 04:04:51 crc kubenswrapper[4898]: I0120 04:04:51.892494 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" Jan 20 04:04:51 crc kubenswrapper[4898]: I0120 04:04:51.927845 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 20 04:04:51 crc kubenswrapper[4898]: I0120 04:04:51.948116 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.014664 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c04e8aa-7289-4055-8c32-1fcd6c04f510-config\") pod \"7c04e8aa-7289-4055-8c32-1fcd6c04f510\" (UID: \"7c04e8aa-7289-4055-8c32-1fcd6c04f510\") " Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.015013 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j8xv\" (UniqueName: \"kubernetes.io/projected/7c04e8aa-7289-4055-8c32-1fcd6c04f510-kube-api-access-5j8xv\") pod \"7c04e8aa-7289-4055-8c32-1fcd6c04f510\" (UID: \"7c04e8aa-7289-4055-8c32-1fcd6c04f510\") " Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.016104 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c04e8aa-7289-4055-8c32-1fcd6c04f510-config" (OuterVolumeSpecName: "config") pod "7c04e8aa-7289-4055-8c32-1fcd6c04f510" (UID: "7c04e8aa-7289-4055-8c32-1fcd6c04f510"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.020715 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c04e8aa-7289-4055-8c32-1fcd6c04f510-kube-api-access-5j8xv" (OuterVolumeSpecName: "kube-api-access-5j8xv") pod "7c04e8aa-7289-4055-8c32-1fcd6c04f510" (UID: "7c04e8aa-7289-4055-8c32-1fcd6c04f510"). InnerVolumeSpecName "kube-api-access-5j8xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.036641 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.041546 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.122422 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j8xv\" (UniqueName: \"kubernetes.io/projected/7c04e8aa-7289-4055-8c32-1fcd6c04f510-kube-api-access-5j8xv\") on node \"crc\" DevicePath \"\"" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.122461 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c04e8aa-7289-4055-8c32-1fcd6c04f510-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.227214 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-config\") pod \"c831765e-262d-4068-9411-8bbf3bbd0190\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.227360 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-dns-svc\") pod \"c831765e-262d-4068-9411-8bbf3bbd0190\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.227393 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtqgm\" (UniqueName: \"kubernetes.io/projected/c831765e-262d-4068-9411-8bbf3bbd0190-kube-api-access-rtqgm\") pod \"c831765e-262d-4068-9411-8bbf3bbd0190\" (UID: \"c831765e-262d-4068-9411-8bbf3bbd0190\") " Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.229141 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c831765e-262d-4068-9411-8bbf3bbd0190" (UID: "c831765e-262d-4068-9411-8bbf3bbd0190"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.229318 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-config" (OuterVolumeSpecName: "config") pod "c831765e-262d-4068-9411-8bbf3bbd0190" (UID: "c831765e-262d-4068-9411-8bbf3bbd0190"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.232807 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c831765e-262d-4068-9411-8bbf3bbd0190-kube-api-access-rtqgm" (OuterVolumeSpecName: "kube-api-access-rtqgm") pod "c831765e-262d-4068-9411-8bbf3bbd0190" (UID: "c831765e-262d-4068-9411-8bbf3bbd0190"). InnerVolumeSpecName "kube-api-access-rtqgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.246047 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ln6nh"] Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.329355 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.329625 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtqgm\" (UniqueName: \"kubernetes.io/projected/c831765e-262d-4068-9411-8bbf3bbd0190-kube-api-access-rtqgm\") on node \"crc\" DevicePath \"\"" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.329638 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c831765e-262d-4068-9411-8bbf3bbd0190-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.332395 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 04:04:52 crc kubenswrapper[4898]: W0120 04:04:52.343113 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82d86080_ab0b_4b48_9847_ead3c4bcc6c4.slice/crio-f303be99fb56088068975a708625d9960aa1c81d3fe30096efaa1e63910495a6 WatchSource:0}: Error finding container f303be99fb56088068975a708625d9960aa1c81d3fe30096efaa1e63910495a6: Status 404 returned error can't find the container with id f303be99fb56088068975a708625d9960aa1c81d3fe30096efaa1e63910495a6 Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.388174 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9mdd5"] Jan 20 04:04:52 crc kubenswrapper[4898]: W0120 04:04:52.391311 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded6e242c_cbda_4cd8_a1e9_9c9f7c3b75ef.slice/crio-e7fc25cd24985b1a86551c802fd8c2af917c59eff25ec0e8648a0e457d4a41b5 WatchSource:0}: Error finding container e7fc25cd24985b1a86551c802fd8c2af917c59eff25ec0e8648a0e457d4a41b5: Status 404 returned error can't find the container with id e7fc25cd24985b1a86551c802fd8c2af917c59eff25ec0e8648a0e457d4a41b5 Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.473557 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"82d86080-ab0b-4b48-9847-ead3c4bcc6c4","Type":"ContainerStarted","Data":"f303be99fb56088068975a708625d9960aa1c81d3fe30096efaa1e63910495a6"} Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.475843 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9mdd5" event={"ID":"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef","Type":"ContainerStarted","Data":"e7fc25cd24985b1a86551c802fd8c2af917c59eff25ec0e8648a0e457d4a41b5"} Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.478710 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ln6nh" event={"ID":"a6903c19-3320-443c-8713-105a39a65527","Type":"ContainerStarted","Data":"eb0a207040acd3267904f3b7125aab0fe9622856ca6ebff718118e885f9d25d1"} Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.481052 4898 generic.go:334] "Generic (PLEG): container finished" podID="191ea2a9-3d41-4a1a-a805-f797900d51c1" 
containerID="11bb3c93817ffacadd5d296f5a171ce68a89d32a4d1242479e10c9d58a5f230d" exitCode=0 Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.481135 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" event={"ID":"191ea2a9-3d41-4a1a-a805-f797900d51c1","Type":"ContainerDied","Data":"11bb3c93817ffacadd5d296f5a171ce68a89d32a4d1242479e10c9d58a5f230d"} Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.482527 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d881812d-76d9-4618-8e72-815f0d9571f5","Type":"ContainerStarted","Data":"84f98aab2fb267bf68fb413933524c8c92de45281d548f3f4039d42e5514c243"} Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.484986 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.484986 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lh54w" event={"ID":"7c04e8aa-7289-4055-8c32-1fcd6c04f510","Type":"ContainerDied","Data":"0399470f1035d2c21cbd91f6909abd2e102c917d3742a5884ecf0f8611ab8ab7"} Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.486307 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1f422f4-afd1-4794-85b1-cb82712e004a","Type":"ContainerStarted","Data":"b6327113e9267bbe69c47cb409531f082d11414f7d2d13b5d814b1df8da59b36"} Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.487533 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" event={"ID":"c831765e-262d-4068-9411-8bbf3bbd0190","Type":"ContainerDied","Data":"07dd99b3d5d29575fb83073e8c8a9e8a03c493b338e4b8647db42ae4fcd8f28b"} Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.487586 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xdcqs" Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.503343 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f726d262-f94d-4ff3-a4ae-a51076898b72","Type":"ContainerStarted","Data":"6d8a4135ea86a709663236934361ab4a87cd7e4510e4b02275cd29a4ba93acbf"} Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.513661 4898 generic.go:334] "Generic (PLEG): container finished" podID="448402be-0300-4347-915e-f3a209c414e4" containerID="3bb080dde22d1265b03a268f0c6dfd2ef5e0f6d15ebf39b8c6870afbae4eb796" exitCode=0 Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.513725 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" event={"ID":"448402be-0300-4347-915e-f3a209c414e4","Type":"ContainerDied","Data":"3bb080dde22d1265b03a268f0c6dfd2ef5e0f6d15ebf39b8c6870afbae4eb796"} Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.639566 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lh54w"] Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.673475 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lh54w"] Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.710486 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xdcqs"] Jan 20 04:04:52 crc kubenswrapper[4898]: I0120 04:04:52.718666 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xdcqs"] Jan 20 04:04:53 crc kubenswrapper[4898]: I0120 04:04:53.200203 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 20 04:04:53 crc kubenswrapper[4898]: I0120 04:04:53.538656 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8a20df64-f80d-4506-bcf4-2cdcc1eee607","Type":"ContainerStarted","Data":"38b15ed33ed959a2471891f28336a45e956f701d7b2c98798e54eb7d116516a5"} Jan 20 04:04:53 crc kubenswrapper[4898]: I0120 04:04:53.734531 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c04e8aa-7289-4055-8c32-1fcd6c04f510" path="/var/lib/kubelet/pods/7c04e8aa-7289-4055-8c32-1fcd6c04f510/volumes" Jan 20 04:04:53 crc kubenswrapper[4898]: I0120 04:04:53.735066 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c831765e-262d-4068-9411-8bbf3bbd0190" path="/var/lib/kubelet/pods/c831765e-262d-4068-9411-8bbf3bbd0190/volumes" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.390771 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-7hcw6"] Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.393373 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.395702 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.407649 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7hcw6"] Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.516692 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n7gk8"] Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.555081 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8q5b2"] Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.556645 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.558374 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.569387 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4387ced7-ff2c-480f-826c-5765f3a17162-combined-ca-bundle\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.569448 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4387ced7-ff2c-480f-826c-5765f3a17162-ovs-rundir\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.569538 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4387ced7-ff2c-480f-826c-5765f3a17162-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.569604 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4387ced7-ff2c-480f-826c-5765f3a17162-config\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.569630 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4387ced7-ff2c-480f-826c-5765f3a17162-ovn-rundir\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.569723 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b9vh\" (UniqueName: \"kubernetes.io/projected/4387ced7-ff2c-480f-826c-5765f3a17162-kube-api-access-9b9vh\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc 
kubenswrapper[4898]: I0120 04:04:58.572295 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8q5b2"] Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.603875 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" event={"ID":"191ea2a9-3d41-4a1a-a805-f797900d51c1","Type":"ContainerStarted","Data":"f040bd1d66b7fcaa2a6ac6c91c23d592f2c9f66a3eccc98068bdccde3b6dbeb8"} Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.604859 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.626732 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" podStartSLOduration=8.797304946 podStartE2EDuration="25.62670681s" podCreationTimestamp="2026-01-20 04:04:33 +0000 UTC" firstStartedPulling="2026-01-20 04:04:34.597154992 +0000 UTC m=+921.196942851" lastFinishedPulling="2026-01-20 04:04:51.426556856 +0000 UTC m=+938.026344715" observedRunningTime="2026-01-20 04:04:58.624873082 +0000 UTC m=+945.224660931" watchObservedRunningTime="2026-01-20 04:04:58.62670681 +0000 UTC m=+945.226494669" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.670749 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-config\") pod \"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.670825 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4387ced7-ff2c-480f-826c-5765f3a17162-combined-ca-bundle\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.670848 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4387ced7-ff2c-480f-826c-5765f3a17162-ovs-rundir\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.670882 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4387ced7-ff2c-480f-826c-5765f3a17162-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.670907 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.670928 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65vqf\" (UniqueName: \"kubernetes.io/projected/e59c98c1-2988-49de-8a46-2dfba16bbf45-kube-api-access-65vqf\") pod 
\"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.670973 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.671004 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4387ced7-ff2c-480f-826c-5765f3a17162-config\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.671031 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4387ced7-ff2c-480f-826c-5765f3a17162-ovn-rundir\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.671060 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b9vh\" (UniqueName: \"kubernetes.io/projected/4387ced7-ff2c-480f-826c-5765f3a17162-kube-api-access-9b9vh\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.672332 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4387ced7-ff2c-480f-826c-5765f3a17162-ovs-rundir\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.672482 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4387ced7-ff2c-480f-826c-5765f3a17162-ovn-rundir\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.672802 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4387ced7-ff2c-480f-826c-5765f3a17162-config\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.698518 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4387ced7-ff2c-480f-826c-5765f3a17162-combined-ca-bundle\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.698884 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4387ced7-ff2c-480f-826c-5765f3a17162-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " 
pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.702894 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b9vh\" (UniqueName: \"kubernetes.io/projected/4387ced7-ff2c-480f-826c-5765f3a17162-kube-api-access-9b9vh\") pod \"ovn-controller-metrics-7hcw6\" (UID: \"4387ced7-ff2c-480f-826c-5765f3a17162\") " pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.728418 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-7hcw6" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.746879 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fxbxb"] Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.772453 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.772495 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65vqf\" (UniqueName: \"kubernetes.io/projected/e59c98c1-2988-49de-8a46-2dfba16bbf45-kube-api-access-65vqf\") pod \"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.772551 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.772615 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-config\") pod \"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.773027 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-ksczh"] Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.773250 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.773663 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-config\") pod \"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.773856 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") 
" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.775353 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.779637 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.794788 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-ksczh"] Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.797713 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65vqf\" (UniqueName: \"kubernetes.io/projected/e59c98c1-2988-49de-8a46-2dfba16bbf45-kube-api-access-65vqf\") pod \"dnsmasq-dns-5bf47b49b7-8q5b2\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.874701 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-config\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.874801 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.874841 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.874865 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6rxn\" (UniqueName: \"kubernetes.io/projected/30220521-8086-4376-8536-bb9cc5f4bfc5-kube-api-access-t6rxn\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.874922 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-dns-svc\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.880031 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.976345 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.976390 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6rxn\" (UniqueName: \"kubernetes.io/projected/30220521-8086-4376-8536-bb9cc5f4bfc5-kube-api-access-t6rxn\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.976468 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-dns-svc\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.976516 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-config\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.976545 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.977455 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.977500 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.977792 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-config\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:58 crc kubenswrapper[4898]: I0120 04:04:58.978009 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-dns-svc\") pod \"dnsmasq-dns-8554648995-ksczh\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") " pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:04:59 crc kubenswrapper[4898]: I0120 
Jan 20 04:04:59 crc kubenswrapper[4898]: I0120 04:04:59.162219 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-ksczh"
Jan 20 04:05:00 crc kubenswrapper[4898]: I0120 04:05:00.636242 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7hcw6"]
Jan 20 04:05:00 crc kubenswrapper[4898]: I0120 04:05:00.637203 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d","Type":"ContainerStarted","Data":"a2c41cf55f579f272aead4a9eeaa2b757bea313ad084bf8981c29b3ea5c1ba28"}
Jan 20 04:05:00 crc kubenswrapper[4898]: I0120 04:05:00.645684 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" podUID="191ea2a9-3d41-4a1a-a805-f797900d51c1" containerName="dnsmasq-dns" containerID="cri-o://f040bd1d66b7fcaa2a6ac6c91c23d592f2c9f66a3eccc98068bdccde3b6dbeb8" gracePeriod=10
Jan 20 04:05:00 crc kubenswrapper[4898]: I0120 04:05:00.645777 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" event={"ID":"448402be-0300-4347-915e-f3a209c414e4","Type":"ContainerStarted","Data":"ad7427bba363d2d5736a6d3f6c19c467f9bc04dc6882c55999358b9e02c58cfe"}
Jan 20 04:05:00 crc kubenswrapper[4898]: I0120 04:05:00.646051 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" podUID="448402be-0300-4347-915e-f3a209c414e4" containerName="dnsmasq-dns" containerID="cri-o://ad7427bba363d2d5736a6d3f6c19c467f9bc04dc6882c55999358b9e02c58cfe" gracePeriod=10
Jan 20 04:05:00 crc kubenswrapper[4898]: I0120 04:05:00.646294 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8"
Jan 20 04:05:00 crc kubenswrapper[4898]: I0120 04:05:00.692094 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" podStartSLOduration=11.128994283 podStartE2EDuration="27.692074005s" podCreationTimestamp="2026-01-20 04:04:33 +0000 UTC" firstStartedPulling="2026-01-20 04:04:34.911455931 +0000 UTC m=+921.511243790" lastFinishedPulling="2026-01-20 04:04:51.474535653 +0000 UTC m=+938.074323512" observedRunningTime="2026-01-20 04:05:00.681238771 +0000 UTC m=+947.281026630" watchObservedRunningTime="2026-01-20 04:05:00.692074005 +0000 UTC m=+947.291861864"
Jan 20 04:05:00 crc kubenswrapper[4898]: I0120 04:05:00.747691 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8q5b2"]
Jan 20 04:05:00 crc kubenswrapper[4898]: I0120 04:05:00.831031 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-ksczh"]
Jan 20 04:05:00 crc kubenswrapper[4898]: W0120 04:05:00.844179 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30220521_8086_4376_8536_bb9cc5f4bfc5.slice/crio-8052be6cfd0b1e2bf93716021589e5229ff13d0cd9cf2f995cc3aabb7a92d2f8 WatchSource:0}: Error finding container 8052be6cfd0b1e2bf93716021589e5229ff13d0cd9cf2f995cc3aabb7a92d2f8: Status 404 returned error can't find the container with id 8052be6cfd0b1e2bf93716021589e5229ff13d0cd9cf2f995cc3aabb7a92d2f8
Jan 20 04:05:00 crc kubenswrapper[4898]: W0120 04:05:00.856133 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4387ced7_ff2c_480f_826c_5765f3a17162.slice/crio-640c7fb256da3e7a20264827c7b65799363d751805074a84887d96025e4b5476 WatchSource:0}: Error finding container 640c7fb256da3e7a20264827c7b65799363d751805074a84887d96025e4b5476: Status 404 returned error can't find the container with id 640c7fb256da3e7a20264827c7b65799363d751805074a84887d96025e4b5476
Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.676503 4898 generic.go:334] "Generic (PLEG): container finished" podID="191ea2a9-3d41-4a1a-a805-f797900d51c1" containerID="f040bd1d66b7fcaa2a6ac6c91c23d592f2c9f66a3eccc98068bdccde3b6dbeb8" exitCode=0
Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.677063 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" event={"ID":"191ea2a9-3d41-4a1a-a805-f797900d51c1","Type":"ContainerDied","Data":"f040bd1d66b7fcaa2a6ac6c91c23d592f2c9f66a3eccc98068bdccde3b6dbeb8"}
Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.677096 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" event={"ID":"191ea2a9-3d41-4a1a-a805-f797900d51c1","Type":"ContainerDied","Data":"46130267ec1459107e7d5127b3af67ac1674f1b8e571a29323713f372821529c"}
Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.677111 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46130267ec1459107e7d5127b3af67ac1674f1b8e571a29323713f372821529c"
Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.679478 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-ksczh" event={"ID":"30220521-8086-4376-8536-bb9cc5f4bfc5","Type":"ContainerStarted","Data":"8052be6cfd0b1e2bf93716021589e5229ff13d0cd9cf2f995cc3aabb7a92d2f8"}
Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.685140 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9mdd5" event={"ID":"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef","Type":"ContainerStarted","Data":"2e1d598c88c9c9cdac0277b7958136d7f1eff9237680a526f7ce2b9bed2ec353"}
Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.692107 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" event={"ID":"e59c98c1-2988-49de-8a46-2dfba16bbf45","Type":"ContainerStarted","Data":"35b0c13f580037512820a107a8f3d25d0295645162bcd0b2062f547cabee3b6c"}
Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.695054 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7hcw6" event={"ID":"4387ced7-ff2c-480f-826c-5765f3a17162","Type":"ContainerStarted","Data":"640c7fb256da3e7a20264827c7b65799363d751805074a84887d96025e4b5476"}
Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.696523 4898 generic.go:334] "Generic (PLEG): container finished" podID="448402be-0300-4347-915e-f3a209c414e4" containerID="ad7427bba363d2d5736a6d3f6c19c467f9bc04dc6882c55999358b9e02c58cfe" exitCode=0
Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.696597 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" event={"ID":"448402be-0300-4347-915e-f3a209c414e4","Type":"ContainerDied","Data":"ad7427bba363d2d5736a6d3f6c19c467f9bc04dc6882c55999358b9e02c58cfe"}
event={"ID":"448402be-0300-4347-915e-f3a209c414e4","Type":"ContainerDied","Data":"ad7427bba363d2d5736a6d3f6c19c467f9bc04dc6882c55999358b9e02c58cfe"} Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.696616 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" event={"ID":"448402be-0300-4347-915e-f3a209c414e4","Type":"ContainerDied","Data":"8df9d75fdc1a379dbd9708fff0782cc1a7888ab0a3187b50026989cf51c6bd23"} Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.696626 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8df9d75fdc1a379dbd9708fff0782cc1a7888ab0a3187b50026989cf51c6bd23" Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.698198 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7f47b2f9-88d3-43e4-9f9c-da4340a63519","Type":"ContainerStarted","Data":"8d66bda023001f24f3238ddf9be1d67e65705b05764f59674b5c22c6b0e02efe"} Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.698347 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.759357 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.902031206 podStartE2EDuration="23.759326224s" podCreationTimestamp="2026-01-20 04:04:38 +0000 UTC" firstStartedPulling="2026-01-20 04:04:50.408814332 +0000 UTC m=+937.008602191" lastFinishedPulling="2026-01-20 04:04:59.26610935 +0000 UTC m=+945.865897209" observedRunningTime="2026-01-20 04:05:01.752596202 +0000 UTC m=+948.352384071" watchObservedRunningTime="2026-01-20 04:05:01.759326224 +0000 UTC m=+948.359114083" Jan 20 04:05:01 crc kubenswrapper[4898]: I0120 04:05:01.989581 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.147992 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-config\") pod \"448402be-0300-4347-915e-f3a209c414e4\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.148116 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzcv7\" (UniqueName: \"kubernetes.io/projected/448402be-0300-4347-915e-f3a209c414e4-kube-api-access-nzcv7\") pod \"448402be-0300-4347-915e-f3a209c414e4\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.148295 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-dns-svc\") pod \"448402be-0300-4347-915e-f3a209c414e4\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.188122 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.209208 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448402be-0300-4347-915e-f3a209c414e4-kube-api-access-nzcv7" (OuterVolumeSpecName: "kube-api-access-nzcv7") pod "448402be-0300-4347-915e-f3a209c414e4" (UID: "448402be-0300-4347-915e-f3a209c414e4"). 
InnerVolumeSpecName "kube-api-access-nzcv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.234054 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "448402be-0300-4347-915e-f3a209c414e4" (UID: "448402be-0300-4347-915e-f3a209c414e4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.249601 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-config" (OuterVolumeSpecName: "config") pod "448402be-0300-4347-915e-f3a209c414e4" (UID: "448402be-0300-4347-915e-f3a209c414e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.249977 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-config\") pod \"448402be-0300-4347-915e-f3a209c414e4\" (UID: \"448402be-0300-4347-915e-f3a209c414e4\") " Jan 20 04:05:02 crc kubenswrapper[4898]: W0120 04:05:02.250617 4898 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/448402be-0300-4347-915e-f3a209c414e4/volumes/kubernetes.io~configmap/config Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.250660 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-config" (OuterVolumeSpecName: "config") pod "448402be-0300-4347-915e-f3a209c414e4" (UID: "448402be-0300-4347-915e-f3a209c414e4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.251164 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.251191 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzcv7\" (UniqueName: \"kubernetes.io/projected/448402be-0300-4347-915e-f3a209c414e4-kube-api-access-nzcv7\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.251206 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/448402be-0300-4347-915e-f3a209c414e4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.352698 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r5px\" (UniqueName: \"kubernetes.io/projected/191ea2a9-3d41-4a1a-a805-f797900d51c1-kube-api-access-5r5px\") pod \"191ea2a9-3d41-4a1a-a805-f797900d51c1\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.353075 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-config\") pod \"191ea2a9-3d41-4a1a-a805-f797900d51c1\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.353340 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-dns-svc\") pod \"191ea2a9-3d41-4a1a-a805-f797900d51c1\" (UID: \"191ea2a9-3d41-4a1a-a805-f797900d51c1\") " Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.356054 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/191ea2a9-3d41-4a1a-a805-f797900d51c1-kube-api-access-5r5px" (OuterVolumeSpecName: "kube-api-access-5r5px") pod "191ea2a9-3d41-4a1a-a805-f797900d51c1" (UID: "191ea2a9-3d41-4a1a-a805-f797900d51c1"). InnerVolumeSpecName "kube-api-access-5r5px". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.455070 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r5px\" (UniqueName: \"kubernetes.io/projected/191ea2a9-3d41-4a1a-a805-f797900d51c1-kube-api-access-5r5px\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.542006 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "191ea2a9-3d41-4a1a-a805-f797900d51c1" (UID: "191ea2a9-3d41-4a1a-a805-f797900d51c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.551677 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-config" (OuterVolumeSpecName: "config") pod "191ea2a9-3d41-4a1a-a805-f797900d51c1" (UID: "191ea2a9-3d41-4a1a-a805-f797900d51c1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.558119 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.558140 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/191ea2a9-3d41-4a1a-a805-f797900d51c1-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.707974 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"82d86080-ab0b-4b48-9847-ead3c4bcc6c4","Type":"ContainerStarted","Data":"8f216a79578b9d7bc5c3ddae13aa5ee78c202eda58ee9a9b0c3da01c5968faa1"} Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.710818 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f726d262-f94d-4ff3-a4ae-a51076898b72","Type":"ContainerStarted","Data":"ab47922e6ca9317a41b162663cd26342d8981421d0e239e7808aace9c0474273"} Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.712450 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48","Type":"ContainerStarted","Data":"80d5aaa819f1e01ee5c665789ec2ed83a4bffebdc2e9bb911204502b2bff4260"} Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.714511 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d881812d-76d9-4618-8e72-815f0d9571f5","Type":"ContainerStarted","Data":"62bb4269d4b8ebb29a0c3dc83c0bcb1f3179b5128fe5725e125d2532abff148c"} Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.714658 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.716309 4898 generic.go:334] "Generic (PLEG): container finished" podID="e59c98c1-2988-49de-8a46-2dfba16bbf45" containerID="b24b65914d37eb7ffb476f1e823c3369ecd0d2885e1b395ad3953dbcd3597ece" exitCode=0 Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.716381 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" event={"ID":"e59c98c1-2988-49de-8a46-2dfba16bbf45","Type":"ContainerDied","Data":"b24b65914d37eb7ffb476f1e823c3369ecd0d2885e1b395ad3953dbcd3597ece"} Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.718311 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1f422f4-afd1-4794-85b1-cb82712e004a","Type":"ContainerStarted","Data":"d0e1a4748f118d191c7c7524e20f05dda6ac89b1268758bbde0dd600550730db"} Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.723644 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8a20df64-f80d-4506-bcf4-2cdcc1eee607","Type":"ContainerStarted","Data":"259eb329722a9709a0223e601b600bd5d11be7484c29b715aa5b573a8699bd45"} Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.724731 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ln6nh" event={"ID":"a6903c19-3320-443c-8713-105a39a65527","Type":"ContainerStarted","Data":"b39e7fa6c41dde1c311e90623d438bc57f6acfe14dc195e63862299f6cee650a"} Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.725553 4898 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-ln6nh" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.735712 4898 generic.go:334] "Generic (PLEG): container finished" podID="30220521-8086-4376-8536-bb9cc5f4bfc5" containerID="bf02eda8b1f4769afcd44eb0c1dbf3420eb52fc2ecbc9c5f40188d9e9a5ebc3b" exitCode=0 Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.735809 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-ksczh" event={"ID":"30220521-8086-4376-8536-bb9cc5f4bfc5","Type":"ContainerDied","Data":"bf02eda8b1f4769afcd44eb0c1dbf3420eb52fc2ecbc9c5f40188d9e9a5ebc3b"} Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.759064 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9mdd5" event={"ID":"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef","Type":"ContainerDied","Data":"2e1d598c88c9c9cdac0277b7958136d7f1eff9237680a526f7ce2b9bed2ec353"} Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.758600 4898 generic.go:334] "Generic (PLEG): container finished" podID="ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef" containerID="2e1d598c88c9c9cdac0277b7958136d7f1eff9237680a526f7ce2b9bed2ec353" exitCode=0 Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.759277 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-n7gk8" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.759524 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fxbxb" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.772867 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.051348558 podStartE2EDuration="22.772840255s" podCreationTimestamp="2026-01-20 04:04:40 +0000 UTC" firstStartedPulling="2026-01-20 04:04:51.965572852 +0000 UTC m=+938.565360701" lastFinishedPulling="2026-01-20 04:05:01.687064529 +0000 UTC m=+948.286852398" observedRunningTime="2026-01-20 04:05:02.772343109 +0000 UTC m=+949.372130968" watchObservedRunningTime="2026-01-20 04:05:02.772840255 +0000 UTC m=+949.372628104" Jan 20 04:05:02 crc kubenswrapper[4898]: I0120 04:05:02.803000 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ln6nh" podStartSLOduration=9.980025719 podStartE2EDuration="17.802980249s" podCreationTimestamp="2026-01-20 04:04:45 +0000 UTC" firstStartedPulling="2026-01-20 04:04:52.293525152 +0000 UTC m=+938.893313011" lastFinishedPulling="2026-01-20 04:05:00.116479672 +0000 UTC m=+946.716267541" observedRunningTime="2026-01-20 04:05:02.801956106 +0000 UTC m=+949.401743975" watchObservedRunningTime="2026-01-20 04:05:02.802980249 +0000 UTC m=+949.402768108" Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.000641 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n7gk8"] Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.009387 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n7gk8"] Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.016504 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fxbxb"] Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.024814 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fxbxb"] Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.737068 4898 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="191ea2a9-3d41-4a1a-a805-f797900d51c1" path="/var/lib/kubelet/pods/191ea2a9-3d41-4a1a-a805-f797900d51c1/volumes" Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.741922 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="448402be-0300-4347-915e-f3a209c414e4" path="/var/lib/kubelet/pods/448402be-0300-4347-915e-f3a209c414e4/volumes" Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.780382 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-ksczh" event={"ID":"30220521-8086-4376-8536-bb9cc5f4bfc5","Type":"ContainerStarted","Data":"9561e03d092a8f25a11876529cb3f049ded9383704c43e2cf7a78cf7f6281ad9"} Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.780667 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.784951 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9mdd5" event={"ID":"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef","Type":"ContainerStarted","Data":"25eb31f9973689c170ea3f1cd9423d535a2c7dc596f5ba22729a5a6f915c13e8"} Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.784977 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9mdd5" event={"ID":"ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef","Type":"ContainerStarted","Data":"721db57e7b640bcdc70111e1f3ee753f48fb123981ff1d8fc3e4b0a5f04d6760"} Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.785549 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.785580 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.789210 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" event={"ID":"e59c98c1-2988-49de-8a46-2dfba16bbf45","Type":"ContainerStarted","Data":"62741871c24e1bdd0511e5ec8423d38d2fb39742de366ab4b43fdfc286c97f70"} Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.804158 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-ksczh" podStartSLOduration=5.804138239 podStartE2EDuration="5.804138239s" podCreationTimestamp="2026-01-20 04:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:03.802319772 +0000 UTC m=+950.402107651" watchObservedRunningTime="2026-01-20 04:05:03.804138239 +0000 UTC m=+950.403926098" Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.823253 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" podStartSLOduration=5.823231622 podStartE2EDuration="5.823231622s" podCreationTimestamp="2026-01-20 04:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:03.821696044 +0000 UTC m=+950.421483923" watchObservedRunningTime="2026-01-20 04:05:03.823231622 +0000 UTC m=+950.423019471" Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.847092 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9mdd5" podStartSLOduration=11.251003274 
podStartE2EDuration="18.847071187s" podCreationTimestamp="2026-01-20 04:04:45 +0000 UTC" firstStartedPulling="2026-01-20 04:04:52.393847846 +0000 UTC m=+938.993635705" lastFinishedPulling="2026-01-20 04:04:59.989915719 +0000 UTC m=+946.589703618" observedRunningTime="2026-01-20 04:05:03.841966005 +0000 UTC m=+950.441753864" watchObservedRunningTime="2026-01-20 04:05:03.847071187 +0000 UTC m=+950.446859046" Jan 20 04:05:03 crc kubenswrapper[4898]: I0120 04:05:03.893668 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:05:04 crc kubenswrapper[4898]: I0120 04:05:04.807457 4898 generic.go:334] "Generic (PLEG): container finished" podID="ec4f8f5c-5a5e-4c01-a81a-567a6e62176d" containerID="a2c41cf55f579f272aead4a9eeaa2b757bea313ad084bf8981c29b3ea5c1ba28" exitCode=0 Jan 20 04:05:04 crc kubenswrapper[4898]: I0120 04:05:04.807636 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d","Type":"ContainerDied","Data":"a2c41cf55f579f272aead4a9eeaa2b757bea313ad084bf8981c29b3ea5c1ba28"} Jan 20 04:05:05 crc kubenswrapper[4898]: I0120 04:05:05.817402 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8a20df64-f80d-4506-bcf4-2cdcc1eee607","Type":"ContainerStarted","Data":"7495a8deb289aedd88caf04af5d00f41252af7529ab9dd93925847195b6547e3"} Jan 20 04:05:05 crc kubenswrapper[4898]: I0120 04:05:05.821367 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7hcw6" event={"ID":"4387ced7-ff2c-480f-826c-5765f3a17162","Type":"ContainerStarted","Data":"e6ced2d7702c9ce06fecc221d5712d1e07b29b99e6914da1708e02e6fb576677"} Jan 20 04:05:05 crc kubenswrapper[4898]: I0120 04:05:05.824110 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"82d86080-ab0b-4b48-9847-ead3c4bcc6c4","Type":"ContainerStarted","Data":"9516e8bd4d39d02a95da1084116655c291b738e8b34d314c3c1ba58b1f70e742"} Jan 20 04:05:05 crc kubenswrapper[4898]: I0120 04:05:05.828375 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ec4f8f5c-5a5e-4c01-a81a-567a6e62176d","Type":"ContainerStarted","Data":"ab9fa200f2edf6a6ac6730f88e8b5b88b297ab5e14e1cc880ee74e84e4790f8e"} Jan 20 04:05:05 crc kubenswrapper[4898]: I0120 04:05:05.851297 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.961840942 podStartE2EDuration="22.851271637s" podCreationTimestamp="2026-01-20 04:04:43 +0000 UTC" firstStartedPulling="2026-01-20 04:04:53.486582111 +0000 UTC m=+940.086369970" lastFinishedPulling="2026-01-20 04:05:05.376012786 +0000 UTC m=+951.975800665" observedRunningTime="2026-01-20 04:05:05.839216865 +0000 UTC m=+952.439004724" watchObservedRunningTime="2026-01-20 04:05:05.851271637 +0000 UTC m=+952.451059536" Jan 20 04:05:05 crc kubenswrapper[4898]: I0120 04:05:05.862400 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.803693346 podStartE2EDuration="19.862382768s" podCreationTimestamp="2026-01-20 04:04:46 +0000 UTC" firstStartedPulling="2026-01-20 04:04:52.347112407 +0000 UTC m=+938.946900266" lastFinishedPulling="2026-01-20 04:05:05.405801829 +0000 UTC m=+952.005589688" observedRunningTime="2026-01-20 04:05:05.861756278 +0000 UTC m=+952.461544137" watchObservedRunningTime="2026-01-20 
04:05:05.862382768 +0000 UTC m=+952.462170627" Jan 20 04:05:05 crc kubenswrapper[4898]: I0120 04:05:05.890269 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.292604575 podStartE2EDuration="30.890249759s" podCreationTimestamp="2026-01-20 04:04:35 +0000 UTC" firstStartedPulling="2026-01-20 04:04:51.366050453 +0000 UTC m=+937.965838312" lastFinishedPulling="2026-01-20 04:04:58.963695637 +0000 UTC m=+945.563483496" observedRunningTime="2026-01-20 04:05:05.886627344 +0000 UTC m=+952.486415193" watchObservedRunningTime="2026-01-20 04:05:05.890249759 +0000 UTC m=+952.490037618" Jan 20 04:05:05 crc kubenswrapper[4898]: I0120 04:05:05.905077 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-7hcw6" podStartSLOduration=3.331897818 podStartE2EDuration="7.905058407s" podCreationTimestamp="2026-01-20 04:04:58 +0000 UTC" firstStartedPulling="2026-01-20 04:05:00.859703006 +0000 UTC m=+947.459490865" lastFinishedPulling="2026-01-20 04:05:05.432863595 +0000 UTC m=+952.032651454" observedRunningTime="2026-01-20 04:05:05.904446608 +0000 UTC m=+952.504234467" watchObservedRunningTime="2026-01-20 04:05:05.905058407 +0000 UTC m=+952.504846266" Jan 20 04:05:06 crc kubenswrapper[4898]: I0120 04:05:06.120317 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 20 04:05:06 crc kubenswrapper[4898]: I0120 04:05:06.178142 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 20 04:05:06 crc kubenswrapper[4898]: I0120 04:05:06.710182 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 20 04:05:06 crc kubenswrapper[4898]: I0120 04:05:06.710564 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 20 04:05:06 crc kubenswrapper[4898]: I0120 04:05:06.836270 4898 generic.go:334] "Generic (PLEG): container finished" podID="f726d262-f94d-4ff3-a4ae-a51076898b72" containerID="ab47922e6ca9317a41b162663cd26342d8981421d0e239e7808aace9c0474273" exitCode=0 Jan 20 04:05:06 crc kubenswrapper[4898]: I0120 04:05:06.836409 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f726d262-f94d-4ff3-a4ae-a51076898b72","Type":"ContainerDied","Data":"ab47922e6ca9317a41b162663cd26342d8981421d0e239e7808aace9c0474273"} Jan 20 04:05:06 crc kubenswrapper[4898]: I0120 04:05:06.836825 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 20 04:05:07 crc kubenswrapper[4898]: I0120 04:05:07.850193 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f726d262-f94d-4ff3-a4ae-a51076898b72","Type":"ContainerStarted","Data":"6b4e451d8873010cc93f71f25295f0e907cdbff71f60b69fdc0562b8360514cc"} Jan 20 04:05:07 crc kubenswrapper[4898]: I0120 04:05:07.887567 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.889285157 podStartE2EDuration="31.88754179s" podCreationTimestamp="2026-01-20 04:04:36 +0000 UTC" firstStartedPulling="2026-01-20 04:04:51.959640524 +0000 UTC m=+938.559428383" lastFinishedPulling="2026-01-20 04:04:59.957897117 +0000 UTC m=+946.557685016" observedRunningTime="2026-01-20 04:05:07.87932571 +0000 UTC m=+954.479113609" 
watchObservedRunningTime="2026-01-20 04:05:07.88754179 +0000 UTC m=+954.487329689" Jan 20 04:05:07 crc kubenswrapper[4898]: I0120 04:05:07.924085 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 20 04:05:08 crc kubenswrapper[4898]: I0120 04:05:08.220471 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 20 04:05:08 crc kubenswrapper[4898]: I0120 04:05:08.220888 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 20 04:05:08 crc kubenswrapper[4898]: I0120 04:05:08.574717 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 20 04:05:08 crc kubenswrapper[4898]: I0120 04:05:08.625472 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 20 04:05:08 crc kubenswrapper[4898]: I0120 04:05:08.682770 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 20 04:05:08 crc kubenswrapper[4898]: I0120 04:05:08.857412 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 20 04:05:08 crc kubenswrapper[4898]: I0120 04:05:08.881687 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:05:08 crc kubenswrapper[4898]: I0120 04:05:08.918213 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.142495 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 20 04:05:09 crc kubenswrapper[4898]: E0120 04:05:09.143402 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191ea2a9-3d41-4a1a-a805-f797900d51c1" containerName="init" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.143419 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="191ea2a9-3d41-4a1a-a805-f797900d51c1" containerName="init" Jan 20 04:05:09 crc kubenswrapper[4898]: E0120 04:05:09.143464 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191ea2a9-3d41-4a1a-a805-f797900d51c1" containerName="dnsmasq-dns" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.143473 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="191ea2a9-3d41-4a1a-a805-f797900d51c1" containerName="dnsmasq-dns" Jan 20 04:05:09 crc kubenswrapper[4898]: E0120 04:05:09.143502 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448402be-0300-4347-915e-f3a209c414e4" containerName="dnsmasq-dns" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.143511 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="448402be-0300-4347-915e-f3a209c414e4" containerName="dnsmasq-dns" Jan 20 04:05:09 crc kubenswrapper[4898]: E0120 04:05:09.143556 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448402be-0300-4347-915e-f3a209c414e4" containerName="init" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.143564 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="448402be-0300-4347-915e-f3a209c414e4" containerName="init" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.143874 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="448402be-0300-4347-915e-f3a209c414e4" containerName="dnsmasq-dns" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 
04:05:09.143889 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="191ea2a9-3d41-4a1a-a805-f797900d51c1" containerName="dnsmasq-dns" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.152599 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.154134 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-d6pt9" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.154885 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.155084 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.156421 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.158654 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.163599 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-ksczh" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.208348 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n6q8\" (UniqueName: \"kubernetes.io/projected/7358529c-1249-443a-b295-bf0250c63af1-kube-api-access-7n6q8\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.208642 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7358529c-1249-443a-b295-bf0250c63af1-config\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.208716 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7358529c-1249-443a-b295-bf0250c63af1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.208880 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7358529c-1249-443a-b295-bf0250c63af1-scripts\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.208988 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7358529c-1249-443a-b295-bf0250c63af1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.209075 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7358529c-1249-443a-b295-bf0250c63af1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " 
pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.209159 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7358529c-1249-443a-b295-bf0250c63af1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.255811 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8q5b2"] Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.311188 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n6q8\" (UniqueName: \"kubernetes.io/projected/7358529c-1249-443a-b295-bf0250c63af1-kube-api-access-7n6q8\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.311272 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7358529c-1249-443a-b295-bf0250c63af1-config\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.311303 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7358529c-1249-443a-b295-bf0250c63af1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.311357 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7358529c-1249-443a-b295-bf0250c63af1-scripts\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.311387 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7358529c-1249-443a-b295-bf0250c63af1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.311420 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7358529c-1249-443a-b295-bf0250c63af1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.311481 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7358529c-1249-443a-b295-bf0250c63af1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.313953 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7358529c-1249-443a-b295-bf0250c63af1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.314340 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7358529c-1249-443a-b295-bf0250c63af1-scripts\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.315502 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7358529c-1249-443a-b295-bf0250c63af1-config\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.318768 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7358529c-1249-443a-b295-bf0250c63af1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.319942 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7358529c-1249-443a-b295-bf0250c63af1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.323007 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7358529c-1249-443a-b295-bf0250c63af1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.331984 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n6q8\" (UniqueName: \"kubernetes.io/projected/7358529c-1249-443a-b295-bf0250c63af1-kube-api-access-7n6q8\") pod \"ovn-northd-0\" (UID: \"7358529c-1249-443a-b295-bf0250c63af1\") " pod="openstack/ovn-northd-0" Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.472331 4898 util.go:30] "No sandbox for pod can be found. 
Jan 20 04:05:09 crc kubenswrapper[4898]: I0120 04:05:09.864892 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" podUID="e59c98c1-2988-49de-8a46-2dfba16bbf45" containerName="dnsmasq-dns" containerID="cri-o://62741871c24e1bdd0511e5ec8423d38d2fb39742de366ab4b43fdfc286c97f70" gracePeriod=10
Jan 20 04:05:10 crc kubenswrapper[4898]: I0120 04:05:10.059204 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 20 04:05:10 crc kubenswrapper[4898]: I0120 04:05:10.884090 4898 generic.go:334] "Generic (PLEG): container finished" podID="e59c98c1-2988-49de-8a46-2dfba16bbf45" containerID="62741871c24e1bdd0511e5ec8423d38d2fb39742de366ab4b43fdfc286c97f70" exitCode=0
Jan 20 04:05:10 crc kubenswrapper[4898]: I0120 04:05:10.884311 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" event={"ID":"e59c98c1-2988-49de-8a46-2dfba16bbf45","Type":"ContainerDied","Data":"62741871c24e1bdd0511e5ec8423d38d2fb39742de366ab4b43fdfc286c97f70"}
Jan 20 04:05:10 crc kubenswrapper[4898]: I0120 04:05:10.888400 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7358529c-1249-443a-b295-bf0250c63af1","Type":"ContainerStarted","Data":"da251cbb086879dea93d27fac48c42a41ba3bcb650a4990a41b683ba9c9e9968"}
Jan 20 04:05:10 crc kubenswrapper[4898]: I0120 04:05:10.942705 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qp8ts"]
Jan 20 04:05:10 crc kubenswrapper[4898]: I0120 04:05:10.944326 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:10 crc kubenswrapper[4898]: I0120 04:05:10.952552 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qp8ts"]
Jan 20 04:05:10 crc kubenswrapper[4898]: I0120 04:05:10.960917 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.041641 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-config\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.041698 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.041772 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.041902 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.041936 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9n4w\" (UniqueName: \"kubernetes.io/projected/457b496f-9f17-476b-bf51-30f948f83afb-kube-api-access-v9n4w\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.144538 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.144651 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.144683 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9n4w\" (UniqueName: \"kubernetes.io/projected/457b496f-9f17-476b-bf51-30f948f83afb-kube-api-access-v9n4w\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.144706 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-config\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.144725 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.146012 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-config\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.146091 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.146179 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
\"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.146227 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.164709 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9n4w\" (UniqueName: \"kubernetes.io/projected/457b496f-9f17-476b-bf51-30f948f83afb-kube-api-access-v9n4w\") pod \"dnsmasq-dns-b8fbc5445-qp8ts\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.272926 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.687926 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qp8ts"] Jan 20 04:05:11 crc kubenswrapper[4898]: I0120 04:05:11.897086 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" event={"ID":"457b496f-9f17-476b-bf51-30f948f83afb","Type":"ContainerStarted","Data":"7d7bb4811d05ef5850d21e32ee50f1584ef59083aa5becd774bf8ef35f3a6067"} Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.040856 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.047037 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.049690 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.049690 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.049871 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-d9h7b" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.052636 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.058996 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.059922 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-cache\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.059958 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfst\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-kube-api-access-kdfst\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.059987 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.060012 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-lock\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.060249 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.161925 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-cache\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.161981 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdfst\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-kube-api-access-kdfst\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.162012 4898 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.162042 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-lock\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.162125 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: E0120 04:05:12.162283 4898 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 20 04:05:12 crc kubenswrapper[4898]: E0120 04:05:12.162316 4898 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 20 04:05:12 crc kubenswrapper[4898]: E0120 04:05:12.162362 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift podName:311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b nodeName:}" failed. No retries permitted until 2026-01-20 04:05:12.662342356 +0000 UTC m=+959.262130215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift") pod "swift-storage-0" (UID: "311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b") : configmap "swift-ring-files" not found Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.162372 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-cache\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.162538 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.162612 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-lock\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.191544 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdfst\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-kube-api-access-kdfst\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.207847 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: I0120 04:05:12.669753 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:12 crc kubenswrapper[4898]: E0120 04:05:12.669942 4898 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 20 04:05:12 crc kubenswrapper[4898]: E0120 04:05:12.670320 4898 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 20 04:05:12 crc kubenswrapper[4898]: E0120 04:05:12.670392 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift podName:311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b nodeName:}" failed. No retries permitted until 2026-01-20 04:05:13.670370452 +0000 UTC m=+960.270158311 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift") pod "swift-storage-0" (UID: "311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b") : configmap "swift-ring-files" not found Jan 20 04:05:13 crc kubenswrapper[4898]: I0120 04:05:13.689857 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:13 crc kubenswrapper[4898]: E0120 04:05:13.691427 4898 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 20 04:05:13 crc kubenswrapper[4898]: E0120 04:05:13.691637 4898 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 20 04:05:13 crc kubenswrapper[4898]: E0120 04:05:13.691812 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift podName:311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b nodeName:}" failed. No retries permitted until 2026-01-20 04:05:15.691785283 +0000 UTC m=+962.291573182 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift") pod "swift-storage-0" (UID: "311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b") : configmap "swift-ring-files" not found Jan 20 04:05:13 crc kubenswrapper[4898]: I0120 04:05:13.881925 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" podUID="e59c98c1-2988-49de-8a46-2dfba16bbf45" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.106:5353: connect: connection refused" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.642410 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.809577 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-ovsdbserver-nb\") pod \"e59c98c1-2988-49de-8a46-2dfba16bbf45\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.810628 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65vqf\" (UniqueName: \"kubernetes.io/projected/e59c98c1-2988-49de-8a46-2dfba16bbf45-kube-api-access-65vqf\") pod \"e59c98c1-2988-49de-8a46-2dfba16bbf45\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.810712 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-dns-svc\") pod \"e59c98c1-2988-49de-8a46-2dfba16bbf45\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.810763 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-config\") pod \"e59c98c1-2988-49de-8a46-2dfba16bbf45\" (UID: \"e59c98c1-2988-49de-8a46-2dfba16bbf45\") " Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.816320 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59c98c1-2988-49de-8a46-2dfba16bbf45-kube-api-access-65vqf" (OuterVolumeSpecName: "kube-api-access-65vqf") pod "e59c98c1-2988-49de-8a46-2dfba16bbf45" (UID: "e59c98c1-2988-49de-8a46-2dfba16bbf45"). InnerVolumeSpecName "kube-api-access-65vqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.850408 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e59c98c1-2988-49de-8a46-2dfba16bbf45" (UID: "e59c98c1-2988-49de-8a46-2dfba16bbf45"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.850825 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e59c98c1-2988-49de-8a46-2dfba16bbf45" (UID: "e59c98c1-2988-49de-8a46-2dfba16bbf45"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.852893 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-config" (OuterVolumeSpecName: "config") pod "e59c98c1-2988-49de-8a46-2dfba16bbf45" (UID: "e59c98c1-2988-49de-8a46-2dfba16bbf45"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.913387 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.914255 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.914525 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65vqf\" (UniqueName: \"kubernetes.io/projected/e59c98c1-2988-49de-8a46-2dfba16bbf45-kube-api-access-65vqf\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.914686 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59c98c1-2988-49de-8a46-2dfba16bbf45-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.924540 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" event={"ID":"e59c98c1-2988-49de-8a46-2dfba16bbf45","Type":"ContainerDied","Data":"35b0c13f580037512820a107a8f3d25d0295645162bcd0b2062f547cabee3b6c"} Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.924609 4898 scope.go:117] "RemoveContainer" containerID="62741871c24e1bdd0511e5ec8423d38d2fb39742de366ab4b43fdfc286c97f70" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.924556 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-8q5b2" Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.926053 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" event={"ID":"457b496f-9f17-476b-bf51-30f948f83afb","Type":"ContainerStarted","Data":"26d97c32d0147d4d3b9b7fab46bb8da68c4025e476a428862d0422285502a92b"} Jan 20 04:05:14 crc kubenswrapper[4898]: I0120 04:05:14.945445 4898 scope.go:117] "RemoveContainer" containerID="b24b65914d37eb7ffb476f1e823c3369ecd0d2885e1b395ad3953dbcd3597ece" Jan 20 04:05:15 crc kubenswrapper[4898]: I0120 04:05:15.023853 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8q5b2"] Jan 20 04:05:15 crc kubenswrapper[4898]: I0120 04:05:15.028556 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8q5b2"] Jan 20 04:05:15 crc kubenswrapper[4898]: E0120 04:05:15.144282 4898 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.9:38228->38.102.83.9:46213: write tcp 38.102.83.9:38228->38.102.83.9:46213: write: broken pipe Jan 20 04:05:15 crc kubenswrapper[4898]: I0120 04:05:15.728893 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:15 crc kubenswrapper[4898]: E0120 04:05:15.730351 4898 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 20 04:05:15 crc kubenswrapper[4898]: E0120 04:05:15.730377 4898 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 20 04:05:15 crc kubenswrapper[4898]: E0120 04:05:15.730423 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift podName:311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b nodeName:}" failed. No retries permitted until 2026-01-20 04:05:19.730405221 +0000 UTC m=+966.330193101 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift") pod "swift-storage-0" (UID: "311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b") : configmap "swift-ring-files" not found Jan 20 04:05:15 crc kubenswrapper[4898]: I0120 04:05:15.731680 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59c98c1-2988-49de-8a46-2dfba16bbf45" path="/var/lib/kubelet/pods/e59c98c1-2988-49de-8a46-2dfba16bbf45/volumes" Jan 20 04:05:15 crc kubenswrapper[4898]: I0120 04:05:15.935777 4898 generic.go:334] "Generic (PLEG): container finished" podID="457b496f-9f17-476b-bf51-30f948f83afb" containerID="26d97c32d0147d4d3b9b7fab46bb8da68c4025e476a428862d0422285502a92b" exitCode=0 Jan 20 04:05:15 crc kubenswrapper[4898]: I0120 04:05:15.935825 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" event={"ID":"457b496f-9f17-476b-bf51-30f948f83afb","Type":"ContainerDied","Data":"26d97c32d0147d4d3b9b7fab46bb8da68c4025e476a428862d0422285502a92b"} Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.062287 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-smm2m"] Jan 20 04:05:16 crc kubenswrapper[4898]: E0120 04:05:16.067145 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59c98c1-2988-49de-8a46-2dfba16bbf45" containerName="dnsmasq-dns" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.067180 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59c98c1-2988-49de-8a46-2dfba16bbf45" containerName="dnsmasq-dns" Jan 20 04:05:16 crc kubenswrapper[4898]: E0120 04:05:16.067217 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59c98c1-2988-49de-8a46-2dfba16bbf45" containerName="init" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.067226 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59c98c1-2988-49de-8a46-2dfba16bbf45" containerName="init" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.067530 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59c98c1-2988-49de-8a46-2dfba16bbf45" containerName="dnsmasq-dns" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.068216 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.073515 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.073553 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.073558 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-smm2m"] Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.078014 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.137969 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-ring-data-devices\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.138033 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-dispersionconf\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.138105 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-scripts\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.138136 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-combined-ca-bundle\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.138164 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zjs\" (UniqueName: \"kubernetes.io/projected/77d37150-8bcb-46ff-9b40-aa959b7993d2-kube-api-access-r5zjs\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.138196 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-swiftconf\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.138222 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/77d37150-8bcb-46ff-9b40-aa959b7993d2-etc-swift\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 
04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.239353 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-ring-data-devices\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.239590 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-dispersionconf\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.239726 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-scripts\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.239808 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-combined-ca-bundle\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.239878 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zjs\" (UniqueName: \"kubernetes.io/projected/77d37150-8bcb-46ff-9b40-aa959b7993d2-kube-api-access-r5zjs\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.239965 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-swiftconf\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.240062 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/77d37150-8bcb-46ff-9b40-aa959b7993d2-etc-swift\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.240416 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/77d37150-8bcb-46ff-9b40-aa959b7993d2-etc-swift\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.239984 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-ring-data-devices\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.241233 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-scripts\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.246420 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-swiftconf\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.246637 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-dispersionconf\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.246914 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-combined-ca-bundle\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.260937 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zjs\" (UniqueName: \"kubernetes.io/projected/77d37150-8bcb-46ff-9b40-aa959b7993d2-kube-api-access-r5zjs\") pod \"swift-ring-rebalance-smm2m\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.387778 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z4nsj"] Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.397020 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.400163 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4nsj"] Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.443480 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-utilities\") pod \"redhat-operators-z4nsj\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.443537 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tszzc\" (UniqueName: \"kubernetes.io/projected/911ffe57-42e1-4ea9-96e0-3109c2223da3-kube-api-access-tszzc\") pod \"redhat-operators-z4nsj\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.443567 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-catalog-content\") pod \"redhat-operators-z4nsj\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.479099 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-d9h7b" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.485420 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.545710 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-utilities\") pod \"redhat-operators-z4nsj\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.545786 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tszzc\" (UniqueName: \"kubernetes.io/projected/911ffe57-42e1-4ea9-96e0-3109c2223da3-kube-api-access-tszzc\") pod \"redhat-operators-z4nsj\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.545810 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-catalog-content\") pod \"redhat-operators-z4nsj\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.546127 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-utilities\") pod \"redhat-operators-z4nsj\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.546366 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-catalog-content\") pod \"redhat-operators-z4nsj\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.565605 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tszzc\" (UniqueName: \"kubernetes.io/projected/911ffe57-42e1-4ea9-96e0-3109c2223da3-kube-api-access-tszzc\") pod \"redhat-operators-z4nsj\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.720229 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.946960 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7358529c-1249-443a-b295-bf0250c63af1","Type":"ContainerStarted","Data":"ff950fd22c05d36d798cc6826338938058c39dbde25d7befc5da7ea12938fa6e"} Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.947002 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7358529c-1249-443a-b295-bf0250c63af1","Type":"ContainerStarted","Data":"974bfdba4d56223cf64c07da23c62d90e27f4c7594320ed3c48f90e4daefc079"} Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.947183 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.948943 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" event={"ID":"457b496f-9f17-476b-bf51-30f948f83afb","Type":"ContainerStarted","Data":"cb4199074583a7af2dbac646317eafaf23b3d964fb51addad2e8b286de7d3d3d"} Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.949294 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.969000 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.097366986 podStartE2EDuration="7.968979709s" podCreationTimestamp="2026-01-20 04:05:09 +0000 UTC" firstStartedPulling="2026-01-20 04:05:10.070452142 +0000 UTC m=+956.670240001" lastFinishedPulling="2026-01-20 04:05:15.942064855 +0000 UTC m=+962.541852724" observedRunningTime="2026-01-20 04:05:16.965907063 +0000 UTC m=+963.565694932" watchObservedRunningTime="2026-01-20 04:05:16.968979709 +0000 UTC m=+963.568767568" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.985352 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" podStartSLOduration=6.985334637 podStartE2EDuration="6.985334637s" podCreationTimestamp="2026-01-20 04:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:16.984974575 +0000 UTC m=+963.584762434" watchObservedRunningTime="2026-01-20 04:05:16.985334637 +0000 UTC m=+963.585122496" Jan 20 04:05:16 crc kubenswrapper[4898]: I0120 04:05:16.994402 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 20 04:05:17 crc kubenswrapper[4898]: I0120 04:05:17.016419 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-smm2m"] 
Jan 20 04:05:17 crc kubenswrapper[4898]: I0120 04:05:17.092397 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 20 04:05:17 crc kubenswrapper[4898]: I0120 04:05:17.201148 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4nsj"] Jan 20 04:05:17 crc kubenswrapper[4898]: W0120 04:05:17.205662 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod911ffe57_42e1_4ea9_96e0_3109c2223da3.slice/crio-b029e6a5c1455fd3eb009f114fdd492310a857fa2d2b1c0d002b07070f51d8e3 WatchSource:0}: Error finding container b029e6a5c1455fd3eb009f114fdd492310a857fa2d2b1c0d002b07070f51d8e3: Status 404 returned error can't find the container with id b029e6a5c1455fd3eb009f114fdd492310a857fa2d2b1c0d002b07070f51d8e3 Jan 20 04:05:17 crc kubenswrapper[4898]: I0120 04:05:17.958244 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-smm2m" event={"ID":"77d37150-8bcb-46ff-9b40-aa959b7993d2","Type":"ContainerStarted","Data":"be1cffb71562c2f6c473f1cf3a45b6bd59d8be051813b2e12fcc66e1edc29117"} Jan 20 04:05:17 crc kubenswrapper[4898]: I0120 04:05:17.962143 4898 generic.go:334] "Generic (PLEG): container finished" podID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerID="5bd46ecc51896a03ab13ab71d839f6a62ebaa90fc2304a65b2285b11f1c99eb0" exitCode=0 Jan 20 04:05:17 crc kubenswrapper[4898]: I0120 04:05:17.962329 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4nsj" event={"ID":"911ffe57-42e1-4ea9-96e0-3109c2223da3","Type":"ContainerDied","Data":"5bd46ecc51896a03ab13ab71d839f6a62ebaa90fc2304a65b2285b11f1c99eb0"} Jan 20 04:05:17 crc kubenswrapper[4898]: I0120 04:05:17.962375 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4nsj" event={"ID":"911ffe57-42e1-4ea9-96e0-3109c2223da3","Type":"ContainerStarted","Data":"b029e6a5c1455fd3eb009f114fdd492310a857fa2d2b1c0d002b07070f51d8e3"} Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.269533 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a8cf-account-create-update-5t4l9"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.270511 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a8cf-account-create-update-5t4l9" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.273299 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.284283 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f059d6d8-faf3-4f28-977d-c8786a790906-operator-scripts\") pod \"keystone-a8cf-account-create-update-5t4l9\" (UID: \"f059d6d8-faf3-4f28-977d-c8786a790906\") " pod="openstack/keystone-a8cf-account-create-update-5t4l9" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.284405 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6pzq\" (UniqueName: \"kubernetes.io/projected/f059d6d8-faf3-4f28-977d-c8786a790906-kube-api-access-t6pzq\") pod \"keystone-a8cf-account-create-update-5t4l9\" (UID: \"f059d6d8-faf3-4f28-977d-c8786a790906\") " pod="openstack/keystone-a8cf-account-create-update-5t4l9" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.288841 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a8cf-account-create-update-5t4l9"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.334511 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7g6jm"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.335645 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7g6jm" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.340841 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7g6jm"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.351786 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.389164 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f059d6d8-faf3-4f28-977d-c8786a790906-operator-scripts\") pod \"keystone-a8cf-account-create-update-5t4l9\" (UID: \"f059d6d8-faf3-4f28-977d-c8786a790906\") " pod="openstack/keystone-a8cf-account-create-update-5t4l9" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.389226 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/569429a7-1cab-4f29-9a5f-5430c3364d56-operator-scripts\") pod \"keystone-db-create-7g6jm\" (UID: \"569429a7-1cab-4f29-9a5f-5430c3364d56\") " pod="openstack/keystone-db-create-7g6jm" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.389764 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6pzq\" (UniqueName: \"kubernetes.io/projected/f059d6d8-faf3-4f28-977d-c8786a790906-kube-api-access-t6pzq\") pod \"keystone-a8cf-account-create-update-5t4l9\" (UID: \"f059d6d8-faf3-4f28-977d-c8786a790906\") " pod="openstack/keystone-a8cf-account-create-update-5t4l9" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.389974 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62mv4\" (UniqueName: \"kubernetes.io/projected/569429a7-1cab-4f29-9a5f-5430c3364d56-kube-api-access-62mv4\") pod 
\"keystone-db-create-7g6jm\" (UID: \"569429a7-1cab-4f29-9a5f-5430c3364d56\") " pod="openstack/keystone-db-create-7g6jm" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.390332 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f059d6d8-faf3-4f28-977d-c8786a790906-operator-scripts\") pod \"keystone-a8cf-account-create-update-5t4l9\" (UID: \"f059d6d8-faf3-4f28-977d-c8786a790906\") " pod="openstack/keystone-a8cf-account-create-update-5t4l9" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.407391 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6pzq\" (UniqueName: \"kubernetes.io/projected/f059d6d8-faf3-4f28-977d-c8786a790906-kube-api-access-t6pzq\") pod \"keystone-a8cf-account-create-update-5t4l9\" (UID: \"f059d6d8-faf3-4f28-977d-c8786a790906\") " pod="openstack/keystone-a8cf-account-create-update-5t4l9" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.463029 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.488373 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zbqjf"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.492832 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/569429a7-1cab-4f29-9a5f-5430c3364d56-operator-scripts\") pod \"keystone-db-create-7g6jm\" (UID: \"569429a7-1cab-4f29-9a5f-5430c3364d56\") " pod="openstack/keystone-db-create-7g6jm" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.492963 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62mv4\" (UniqueName: \"kubernetes.io/projected/569429a7-1cab-4f29-9a5f-5430c3364d56-kube-api-access-62mv4\") pod \"keystone-db-create-7g6jm\" (UID: \"569429a7-1cab-4f29-9a5f-5430c3364d56\") " pod="openstack/keystone-db-create-7g6jm" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.493882 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/569429a7-1cab-4f29-9a5f-5430c3364d56-operator-scripts\") pod \"keystone-db-create-7g6jm\" (UID: \"569429a7-1cab-4f29-9a5f-5430c3364d56\") " pod="openstack/keystone-db-create-7g6jm" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.494267 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zbqjf" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.513303 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zbqjf"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.539571 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62mv4\" (UniqueName: \"kubernetes.io/projected/569429a7-1cab-4f29-9a5f-5430c3364d56-kube-api-access-62mv4\") pod \"keystone-db-create-7g6jm\" (UID: \"569429a7-1cab-4f29-9a5f-5430c3364d56\") " pod="openstack/keystone-db-create-7g6jm" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.555024 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85a1-account-create-update-8wh6v"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.556018 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85a1-account-create-update-8wh6v" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.557981 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.565340 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85a1-account-create-update-8wh6v"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.589889 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a8cf-account-create-update-5t4l9" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.594905 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55rbg\" (UniqueName: \"kubernetes.io/projected/33bf9790-4fcb-4959-b8b3-2f77741968c7-kube-api-access-55rbg\") pod \"placement-db-create-zbqjf\" (UID: \"33bf9790-4fcb-4959-b8b3-2f77741968c7\") " pod="openstack/placement-db-create-zbqjf" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.594955 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lww5p\" (UniqueName: \"kubernetes.io/projected/bcc07dbb-703a-49b5-be97-6162c5fba9e0-kube-api-access-lww5p\") pod \"placement-85a1-account-create-update-8wh6v\" (UID: \"bcc07dbb-703a-49b5-be97-6162c5fba9e0\") " pod="openstack/placement-85a1-account-create-update-8wh6v" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.595017 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc07dbb-703a-49b5-be97-6162c5fba9e0-operator-scripts\") pod \"placement-85a1-account-create-update-8wh6v\" (UID: \"bcc07dbb-703a-49b5-be97-6162c5fba9e0\") " pod="openstack/placement-85a1-account-create-update-8wh6v" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.595041 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33bf9790-4fcb-4959-b8b3-2f77741968c7-operator-scripts\") pod \"placement-db-create-zbqjf\" (UID: \"33bf9790-4fcb-4959-b8b3-2f77741968c7\") " pod="openstack/placement-db-create-zbqjf" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.655824 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7g6jm" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.696230 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc07dbb-703a-49b5-be97-6162c5fba9e0-operator-scripts\") pod \"placement-85a1-account-create-update-8wh6v\" (UID: \"bcc07dbb-703a-49b5-be97-6162c5fba9e0\") " pod="openstack/placement-85a1-account-create-update-8wh6v" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.696699 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33bf9790-4fcb-4959-b8b3-2f77741968c7-operator-scripts\") pod \"placement-db-create-zbqjf\" (UID: \"33bf9790-4fcb-4959-b8b3-2f77741968c7\") " pod="openstack/placement-db-create-zbqjf" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.696790 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55rbg\" (UniqueName: \"kubernetes.io/projected/33bf9790-4fcb-4959-b8b3-2f77741968c7-kube-api-access-55rbg\") pod \"placement-db-create-zbqjf\" (UID: \"33bf9790-4fcb-4959-b8b3-2f77741968c7\") " pod="openstack/placement-db-create-zbqjf" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.696819 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lww5p\" (UniqueName: \"kubernetes.io/projected/bcc07dbb-703a-49b5-be97-6162c5fba9e0-kube-api-access-lww5p\") pod \"placement-85a1-account-create-update-8wh6v\" (UID: \"bcc07dbb-703a-49b5-be97-6162c5fba9e0\") " pod="openstack/placement-85a1-account-create-update-8wh6v" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.697202 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc07dbb-703a-49b5-be97-6162c5fba9e0-operator-scripts\") pod \"placement-85a1-account-create-update-8wh6v\" (UID: \"bcc07dbb-703a-49b5-be97-6162c5fba9e0\") " pod="openstack/placement-85a1-account-create-update-8wh6v" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.697412 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33bf9790-4fcb-4959-b8b3-2f77741968c7-operator-scripts\") pod \"placement-db-create-zbqjf\" (UID: \"33bf9790-4fcb-4959-b8b3-2f77741968c7\") " pod="openstack/placement-db-create-zbqjf" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.760089 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dkf2b"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.761523 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-dkf2b" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.762223 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lww5p\" (UniqueName: \"kubernetes.io/projected/bcc07dbb-703a-49b5-be97-6162c5fba9e0-kube-api-access-lww5p\") pod \"placement-85a1-account-create-update-8wh6v\" (UID: \"bcc07dbb-703a-49b5-be97-6162c5fba9e0\") " pod="openstack/placement-85a1-account-create-update-8wh6v" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.770753 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dkf2b"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.777204 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55rbg\" (UniqueName: \"kubernetes.io/projected/33bf9790-4fcb-4959-b8b3-2f77741968c7-kube-api-access-55rbg\") pod \"placement-db-create-zbqjf\" (UID: \"33bf9790-4fcb-4959-b8b3-2f77741968c7\") " pod="openstack/placement-db-create-zbqjf" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.798751 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk6mc\" (UniqueName: \"kubernetes.io/projected/659ed26d-0996-42cf-9288-f9c6567f61a8-kube-api-access-wk6mc\") pod \"glance-db-create-dkf2b\" (UID: \"659ed26d-0996-42cf-9288-f9c6567f61a8\") " pod="openstack/glance-db-create-dkf2b" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.798822 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/659ed26d-0996-42cf-9288-f9c6567f61a8-operator-scripts\") pod \"glance-db-create-dkf2b\" (UID: \"659ed26d-0996-42cf-9288-f9c6567f61a8\") " pod="openstack/glance-db-create-dkf2b" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.871109 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zbqjf" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.881035 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85a1-account-create-update-8wh6v" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.892676 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d625-account-create-update-dkfhm"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.894677 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d625-account-create-update-dkfhm" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.896400 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.901003 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d625-account-create-update-dkfhm"] Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.901088 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk6mc\" (UniqueName: \"kubernetes.io/projected/659ed26d-0996-42cf-9288-f9c6567f61a8-kube-api-access-wk6mc\") pod \"glance-db-create-dkf2b\" (UID: \"659ed26d-0996-42cf-9288-f9c6567f61a8\") " pod="openstack/glance-db-create-dkf2b" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.901216 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/659ed26d-0996-42cf-9288-f9c6567f61a8-operator-scripts\") pod \"glance-db-create-dkf2b\" (UID: \"659ed26d-0996-42cf-9288-f9c6567f61a8\") " pod="openstack/glance-db-create-dkf2b" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.902362 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/659ed26d-0996-42cf-9288-f9c6567f61a8-operator-scripts\") pod \"glance-db-create-dkf2b\" (UID: \"659ed26d-0996-42cf-9288-f9c6567f61a8\") " pod="openstack/glance-db-create-dkf2b" Jan 20 04:05:18 crc kubenswrapper[4898]: I0120 04:05:18.919059 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk6mc\" (UniqueName: \"kubernetes.io/projected/659ed26d-0996-42cf-9288-f9c6567f61a8-kube-api-access-wk6mc\") pod \"glance-db-create-dkf2b\" (UID: \"659ed26d-0996-42cf-9288-f9c6567f61a8\") " pod="openstack/glance-db-create-dkf2b" Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.004336 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c413de1-fc94-4f5e-b697-fb6f94d99d46-operator-scripts\") pod \"glance-d625-account-create-update-dkfhm\" (UID: \"0c413de1-fc94-4f5e-b697-fb6f94d99d46\") " pod="openstack/glance-d625-account-create-update-dkfhm" Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.004715 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qrsw\" (UniqueName: \"kubernetes.io/projected/0c413de1-fc94-4f5e-b697-fb6f94d99d46-kube-api-access-4qrsw\") pod \"glance-d625-account-create-update-dkfhm\" (UID: \"0c413de1-fc94-4f5e-b697-fb6f94d99d46\") " pod="openstack/glance-d625-account-create-update-dkfhm" Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.106350 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c413de1-fc94-4f5e-b697-fb6f94d99d46-operator-scripts\") pod \"glance-d625-account-create-update-dkfhm\" (UID: \"0c413de1-fc94-4f5e-b697-fb6f94d99d46\") " pod="openstack/glance-d625-account-create-update-dkfhm" Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.106462 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qrsw\" (UniqueName: \"kubernetes.io/projected/0c413de1-fc94-4f5e-b697-fb6f94d99d46-kube-api-access-4qrsw\") pod 
\"glance-d625-account-create-update-dkfhm\" (UID: \"0c413de1-fc94-4f5e-b697-fb6f94d99d46\") " pod="openstack/glance-d625-account-create-update-dkfhm" Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.125193 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c413de1-fc94-4f5e-b697-fb6f94d99d46-operator-scripts\") pod \"glance-d625-account-create-update-dkfhm\" (UID: \"0c413de1-fc94-4f5e-b697-fb6f94d99d46\") " pod="openstack/glance-d625-account-create-update-dkfhm" Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.135690 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qrsw\" (UniqueName: \"kubernetes.io/projected/0c413de1-fc94-4f5e-b697-fb6f94d99d46-kube-api-access-4qrsw\") pod \"glance-d625-account-create-update-dkfhm\" (UID: \"0c413de1-fc94-4f5e-b697-fb6f94d99d46\") " pod="openstack/glance-d625-account-create-update-dkfhm" Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.150058 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dkf2b" Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.156811 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a8cf-account-create-update-5t4l9"] Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.222276 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d625-account-create-update-dkfhm" Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.257538 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7g6jm"] Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.412575 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85a1-account-create-update-8wh6v"] Jan 20 04:05:19 crc kubenswrapper[4898]: W0120 04:05:19.437328 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcc07dbb_703a_49b5_be97_6162c5fba9e0.slice/crio-46b20ad7f6c00f568bb857bcd3207f736d6fac1cef186044a49af77e41d07ab0 WatchSource:0}: Error finding container 46b20ad7f6c00f568bb857bcd3207f736d6fac1cef186044a49af77e41d07ab0: Status 404 returned error can't find the container with id 46b20ad7f6c00f568bb857bcd3207f736d6fac1cef186044a49af77e41d07ab0 Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.522149 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zbqjf"] Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.707171 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dkf2b"] Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.747464 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:19 crc kubenswrapper[4898]: E0120 04:05:19.747716 4898 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 20 04:05:19 crc kubenswrapper[4898]: E0120 04:05:19.747735 4898 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 20 04:05:19 crc kubenswrapper[4898]: E0120 04:05:19.747778 4898 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift podName:311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b nodeName:}" failed. No retries permitted until 2026-01-20 04:05:27.747765145 +0000 UTC m=+974.347553004 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift") pod "swift-storage-0" (UID: "311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b") : configmap "swift-ring-files" not found
Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.847758 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d625-account-create-update-dkfhm"]
Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.978358 4898 generic.go:334] "Generic (PLEG): container finished" podID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerID="c26420b85699b133bf687c84696b80c48f0fb888f119a7ed5e0dfa75a103edb9" exitCode=0
Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.978449 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4nsj" event={"ID":"911ffe57-42e1-4ea9-96e0-3109c2223da3","Type":"ContainerDied","Data":"c26420b85699b133bf687c84696b80c48f0fb888f119a7ed5e0dfa75a103edb9"}
Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.987219 4898 generic.go:334] "Generic (PLEG): container finished" podID="f059d6d8-faf3-4f28-977d-c8786a790906" containerID="abbd009e408fa9f3d13162797ae7e15894ddb91e37cd53f98b1c7dd138690d65" exitCode=0
Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.987294 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a8cf-account-create-update-5t4l9" event={"ID":"f059d6d8-faf3-4f28-977d-c8786a790906","Type":"ContainerDied","Data":"abbd009e408fa9f3d13162797ae7e15894ddb91e37cd53f98b1c7dd138690d65"}
Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.987662 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a8cf-account-create-update-5t4l9" event={"ID":"f059d6d8-faf3-4f28-977d-c8786a790906","Type":"ContainerStarted","Data":"417536482bf9e1c611c53f816836658ce4daa827d5e3be408439215d886aad92"}
Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.998587 4898 generic.go:334] "Generic (PLEG): container finished" podID="33bf9790-4fcb-4959-b8b3-2f77741968c7" containerID="1b3254c578de968d14f3a705b2f2532cab8b1d8099cd414e982661a631d3b83c" exitCode=0
Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.998715 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zbqjf" event={"ID":"33bf9790-4fcb-4959-b8b3-2f77741968c7","Type":"ContainerDied","Data":"1b3254c578de968d14f3a705b2f2532cab8b1d8099cd414e982661a631d3b83c"}
Jan 20 04:05:19 crc kubenswrapper[4898]: I0120 04:05:19.998743 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zbqjf" event={"ID":"33bf9790-4fcb-4959-b8b3-2f77741968c7","Type":"ContainerStarted","Data":"f0d72709755d1d76f627b67fd5d45b3e8b88eca52cda1fa1f4dd9b8c6a096272"}
Jan 20 04:05:20 crc kubenswrapper[4898]: I0120 04:05:20.004229 4898 generic.go:334] "Generic (PLEG): container finished" podID="569429a7-1cab-4f29-9a5f-5430c3364d56" containerID="d86152712962ffccbae21714f2c1a828063a081ddca3d31dec292ab5b6d94f07" exitCode=0
Jan 20 04:05:20 crc kubenswrapper[4898]: I0120 04:05:20.004290 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7g6jm" event={"ID":"569429a7-1cab-4f29-9a5f-5430c3364d56","Type":"ContainerDied","Data":"d86152712962ffccbae21714f2c1a828063a081ddca3d31dec292ab5b6d94f07"}
Jan 20 04:05:20 crc kubenswrapper[4898]: I0120 04:05:20.004367 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7g6jm" event={"ID":"569429a7-1cab-4f29-9a5f-5430c3364d56","Type":"ContainerStarted","Data":"5b39879599b5c7c66919df594fa9e41a7ad6bf8d00f7ec99bff298e373cb08a6"}
Jan 20 04:05:20 crc kubenswrapper[4898]: I0120 04:05:20.010128 4898 generic.go:334] "Generic (PLEG): container finished" podID="bcc07dbb-703a-49b5-be97-6162c5fba9e0" containerID="6411c293fddb6f36132b72448ebaf3bd72ad0c7a618658296adc0f5d5be924dc" exitCode=0
Jan 20 04:05:20 crc kubenswrapper[4898]: I0120 04:05:20.010184 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85a1-account-create-update-8wh6v" event={"ID":"bcc07dbb-703a-49b5-be97-6162c5fba9e0","Type":"ContainerDied","Data":"6411c293fddb6f36132b72448ebaf3bd72ad0c7a618658296adc0f5d5be924dc"}
Jan 20 04:05:20 crc kubenswrapper[4898]: I0120 04:05:20.010219 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85a1-account-create-update-8wh6v" event={"ID":"bcc07dbb-703a-49b5-be97-6162c5fba9e0","Type":"ContainerStarted","Data":"46b20ad7f6c00f568bb857bcd3207f736d6fac1cef186044a49af77e41d07ab0"}
Jan 20 04:05:20 crc kubenswrapper[4898]: W0120 04:05:20.545459 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod659ed26d_0996_42cf_9288_f9c6567f61a8.slice/crio-6453993fbbd67c7f0dcaff8c619ae7ac1e114abf8d41e71460482f8572c93748 WatchSource:0}: Error finding container 6453993fbbd67c7f0dcaff8c619ae7ac1e114abf8d41e71460482f8572c93748: Status 404 returned error can't find the container with id 6453993fbbd67c7f0dcaff8c619ae7ac1e114abf8d41e71460482f8572c93748
Jan 20 04:05:20 crc kubenswrapper[4898]: W0120 04:05:20.550862 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c413de1_fc94_4f5e_b697_fb6f94d99d46.slice/crio-fa8777110eb3c93ff0b86a7437dfeb548778b25e4abd4328d5dde40ceecb952d WatchSource:0}: Error finding container fa8777110eb3c93ff0b86a7437dfeb548778b25e4abd4328d5dde40ceecb952d: Status 404 returned error can't find the container with id fa8777110eb3c93ff0b86a7437dfeb548778b25e4abd4328d5dde40ceecb952d
Jan 20 04:05:21 crc kubenswrapper[4898]: I0120 04:05:21.021000 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d625-account-create-update-dkfhm" event={"ID":"0c413de1-fc94-4f5e-b697-fb6f94d99d46","Type":"ContainerStarted","Data":"fa8777110eb3c93ff0b86a7437dfeb548778b25e4abd4328d5dde40ceecb952d"}
Jan 20 04:05:21 crc kubenswrapper[4898]: I0120 04:05:21.022264 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dkf2b" event={"ID":"659ed26d-0996-42cf-9288-f9c6567f61a8","Type":"ContainerStarted","Data":"6453993fbbd67c7f0dcaff8c619ae7ac1e114abf8d41e71460482f8572c93748"}
Jan 20 04:05:21 crc kubenswrapper[4898]: I0120 04:05:21.274764 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts"
Jan 20 04:05:21 crc kubenswrapper[4898]: I0120 04:05:21.328551 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-ksczh"]
Jan 20 04:05:21 crc kubenswrapper[4898]: I0120 04:05:21.328832 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-ksczh" podUID="30220521-8086-4376-8536-bb9cc5f4bfc5" containerName="dnsmasq-dns" containerID="cri-o://9561e03d092a8f25a11876529cb3f049ded9383704c43e2cf7a78cf7f6281ad9" gracePeriod=10
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.036490 4898 generic.go:334] "Generic (PLEG): container finished" podID="30220521-8086-4376-8536-bb9cc5f4bfc5" containerID="9561e03d092a8f25a11876529cb3f049ded9383704c43e2cf7a78cf7f6281ad9" exitCode=0
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.036581 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-ksczh" event={"ID":"30220521-8086-4376-8536-bb9cc5f4bfc5","Type":"ContainerDied","Data":"9561e03d092a8f25a11876529cb3f049ded9383704c43e2cf7a78cf7f6281ad9"}
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.266462 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zbqjf"
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.266635 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a8cf-account-create-update-5t4l9"
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.270942 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85a1-account-create-update-8wh6v"
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.364112 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7g6jm"
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.411994 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33bf9790-4fcb-4959-b8b3-2f77741968c7-operator-scripts\") pod \"33bf9790-4fcb-4959-b8b3-2f77741968c7\" (UID: \"33bf9790-4fcb-4959-b8b3-2f77741968c7\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.412080 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f059d6d8-faf3-4f28-977d-c8786a790906-operator-scripts\") pod \"f059d6d8-faf3-4f28-977d-c8786a790906\" (UID: \"f059d6d8-faf3-4f28-977d-c8786a790906\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.412147 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55rbg\" (UniqueName: \"kubernetes.io/projected/33bf9790-4fcb-4959-b8b3-2f77741968c7-kube-api-access-55rbg\") pod \"33bf9790-4fcb-4959-b8b3-2f77741968c7\" (UID: \"33bf9790-4fcb-4959-b8b3-2f77741968c7\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.412175 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lww5p\" (UniqueName: \"kubernetes.io/projected/bcc07dbb-703a-49b5-be97-6162c5fba9e0-kube-api-access-lww5p\") pod \"bcc07dbb-703a-49b5-be97-6162c5fba9e0\" (UID: \"bcc07dbb-703a-49b5-be97-6162c5fba9e0\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.412234 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc07dbb-703a-49b5-be97-6162c5fba9e0-operator-scripts\") pod \"bcc07dbb-703a-49b5-be97-6162c5fba9e0\" (UID: \"bcc07dbb-703a-49b5-be97-6162c5fba9e0\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.412251 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6pzq\" (UniqueName: \"kubernetes.io/projected/f059d6d8-faf3-4f28-977d-c8786a790906-kube-api-access-t6pzq\") pod \"f059d6d8-faf3-4f28-977d-c8786a790906\" (UID: \"f059d6d8-faf3-4f28-977d-c8786a790906\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.413820 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33bf9790-4fcb-4959-b8b3-2f77741968c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33bf9790-4fcb-4959-b8b3-2f77741968c7" (UID: "33bf9790-4fcb-4959-b8b3-2f77741968c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.413905 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f059d6d8-faf3-4f28-977d-c8786a790906-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f059d6d8-faf3-4f28-977d-c8786a790906" (UID: "f059d6d8-faf3-4f28-977d-c8786a790906"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.414453 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc07dbb-703a-49b5-be97-6162c5fba9e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcc07dbb-703a-49b5-be97-6162c5fba9e0" (UID: "bcc07dbb-703a-49b5-be97-6162c5fba9e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.420798 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33bf9790-4fcb-4959-b8b3-2f77741968c7-kube-api-access-55rbg" (OuterVolumeSpecName: "kube-api-access-55rbg") pod "33bf9790-4fcb-4959-b8b3-2f77741968c7" (UID: "33bf9790-4fcb-4959-b8b3-2f77741968c7"). InnerVolumeSpecName "kube-api-access-55rbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.421979 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc07dbb-703a-49b5-be97-6162c5fba9e0-kube-api-access-lww5p" (OuterVolumeSpecName: "kube-api-access-lww5p") pod "bcc07dbb-703a-49b5-be97-6162c5fba9e0" (UID: "bcc07dbb-703a-49b5-be97-6162c5fba9e0"). InnerVolumeSpecName "kube-api-access-lww5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.425686 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f059d6d8-faf3-4f28-977d-c8786a790906-kube-api-access-t6pzq" (OuterVolumeSpecName: "kube-api-access-t6pzq") pod "f059d6d8-faf3-4f28-977d-c8786a790906" (UID: "f059d6d8-faf3-4f28-977d-c8786a790906"). InnerVolumeSpecName "kube-api-access-t6pzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.514163 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62mv4\" (UniqueName: \"kubernetes.io/projected/569429a7-1cab-4f29-9a5f-5430c3364d56-kube-api-access-62mv4\") pod \"569429a7-1cab-4f29-9a5f-5430c3364d56\" (UID: \"569429a7-1cab-4f29-9a5f-5430c3364d56\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.514326 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/569429a7-1cab-4f29-9a5f-5430c3364d56-operator-scripts\") pod \"569429a7-1cab-4f29-9a5f-5430c3364d56\" (UID: \"569429a7-1cab-4f29-9a5f-5430c3364d56\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.514825 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc07dbb-703a-49b5-be97-6162c5fba9e0-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.514839 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6pzq\" (UniqueName: \"kubernetes.io/projected/f059d6d8-faf3-4f28-977d-c8786a790906-kube-api-access-t6pzq\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.514849 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33bf9790-4fcb-4959-b8b3-2f77741968c7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.514858 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f059d6d8-faf3-4f28-977d-c8786a790906-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.514867 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55rbg\" (UniqueName: \"kubernetes.io/projected/33bf9790-4fcb-4959-b8b3-2f77741968c7-kube-api-access-55rbg\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.514875 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lww5p\" (UniqueName: \"kubernetes.io/projected/bcc07dbb-703a-49b5-be97-6162c5fba9e0-kube-api-access-lww5p\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.515228 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/569429a7-1cab-4f29-9a5f-5430c3364d56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "569429a7-1cab-4f29-9a5f-5430c3364d56" (UID: "569429a7-1cab-4f29-9a5f-5430c3364d56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.518867 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/569429a7-1cab-4f29-9a5f-5430c3364d56-kube-api-access-62mv4" (OuterVolumeSpecName: "kube-api-access-62mv4") pod "569429a7-1cab-4f29-9a5f-5430c3364d56" (UID: "569429a7-1cab-4f29-9a5f-5430c3364d56"). InnerVolumeSpecName "kube-api-access-62mv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.616583 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62mv4\" (UniqueName: \"kubernetes.io/projected/569429a7-1cab-4f29-9a5f-5430c3364d56-kube-api-access-62mv4\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.616639 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/569429a7-1cab-4f29-9a5f-5430c3364d56-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.685143 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-ksczh"
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.819394 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-nb\") pod \"30220521-8086-4376-8536-bb9cc5f4bfc5\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.819450 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-dns-svc\") pod \"30220521-8086-4376-8536-bb9cc5f4bfc5\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.819478 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6rxn\" (UniqueName: \"kubernetes.io/projected/30220521-8086-4376-8536-bb9cc5f4bfc5-kube-api-access-t6rxn\") pod \"30220521-8086-4376-8536-bb9cc5f4bfc5\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.819584 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-config\") pod \"30220521-8086-4376-8536-bb9cc5f4bfc5\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.819611 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-sb\") pod \"30220521-8086-4376-8536-bb9cc5f4bfc5\" (UID: \"30220521-8086-4376-8536-bb9cc5f4bfc5\") "
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.828601 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30220521-8086-4376-8536-bb9cc5f4bfc5-kube-api-access-t6rxn" (OuterVolumeSpecName: "kube-api-access-t6rxn") pod "30220521-8086-4376-8536-bb9cc5f4bfc5" (UID: "30220521-8086-4376-8536-bb9cc5f4bfc5"). InnerVolumeSpecName "kube-api-access-t6rxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.857243 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "30220521-8086-4376-8536-bb9cc5f4bfc5" (UID: "30220521-8086-4376-8536-bb9cc5f4bfc5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.857681 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "30220521-8086-4376-8536-bb9cc5f4bfc5" (UID: "30220521-8086-4376-8536-bb9cc5f4bfc5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.864280 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-config" (OuterVolumeSpecName: "config") pod "30220521-8086-4376-8536-bb9cc5f4bfc5" (UID: "30220521-8086-4376-8536-bb9cc5f4bfc5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.864835 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30220521-8086-4376-8536-bb9cc5f4bfc5" (UID: "30220521-8086-4376-8536-bb9cc5f4bfc5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.922261 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.922315 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.922326 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.922340 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6rxn\" (UniqueName: \"kubernetes.io/projected/30220521-8086-4376-8536-bb9cc5f4bfc5-kube-api-access-t6rxn\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:22 crc kubenswrapper[4898]: I0120 04:05:22.922353 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30220521-8086-4376-8536-bb9cc5f4bfc5-config\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.045821 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-ksczh" event={"ID":"30220521-8086-4376-8536-bb9cc5f4bfc5","Type":"ContainerDied","Data":"8052be6cfd0b1e2bf93716021589e5229ff13d0cd9cf2f995cc3aabb7a92d2f8"}
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.045848 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-ksczh"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.045891 4898 scope.go:117] "RemoveContainer" containerID="9561e03d092a8f25a11876529cb3f049ded9383704c43e2cf7a78cf7f6281ad9"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.047374 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85a1-account-create-update-8wh6v" event={"ID":"bcc07dbb-703a-49b5-be97-6162c5fba9e0","Type":"ContainerDied","Data":"46b20ad7f6c00f568bb857bcd3207f736d6fac1cef186044a49af77e41d07ab0"}
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.047417 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46b20ad7f6c00f568bb857bcd3207f736d6fac1cef186044a49af77e41d07ab0"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.047389 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85a1-account-create-update-8wh6v"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.056473 4898 generic.go:334] "Generic (PLEG): container finished" podID="659ed26d-0996-42cf-9288-f9c6567f61a8" containerID="44ced44f63f45a8fba9ef29341a555df3f499a211ced250d0c5300b5d42298a4" exitCode=0
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.056560 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dkf2b" event={"ID":"659ed26d-0996-42cf-9288-f9c6567f61a8","Type":"ContainerDied","Data":"44ced44f63f45a8fba9ef29341a555df3f499a211ced250d0c5300b5d42298a4"}
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.059365 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4nsj" event={"ID":"911ffe57-42e1-4ea9-96e0-3109c2223da3","Type":"ContainerStarted","Data":"ceb7e6c6a7863e3fb877e44dc1101d6c7db1b8fe0e3d7b409b2c30a5eb89f2d0"}
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.061500 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-smm2m" event={"ID":"77d37150-8bcb-46ff-9b40-aa959b7993d2","Type":"ContainerStarted","Data":"fdaa52cdea769e4e8a9c1916bc4363f6bd0d6c3f4ca621cf2baec0c2a572b29e"}
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.065976 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7g6jm" event={"ID":"569429a7-1cab-4f29-9a5f-5430c3364d56","Type":"ContainerDied","Data":"5b39879599b5c7c66919df594fa9e41a7ad6bf8d00f7ec99bff298e373cb08a6"}
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.066231 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b39879599b5c7c66919df594fa9e41a7ad6bf8d00f7ec99bff298e373cb08a6"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.065994 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7g6jm"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.066134 4898 scope.go:117] "RemoveContainer" containerID="bf02eda8b1f4769afcd44eb0c1dbf3420eb52fc2ecbc9c5f40188d9e9a5ebc3b"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.067666 4898 generic.go:334] "Generic (PLEG): container finished" podID="0c413de1-fc94-4f5e-b697-fb6f94d99d46" containerID="e7a64934f4de9c5b9893918259531267e5446e006e8ae984ed03d695c1bc3422" exitCode=0
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.067705 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d625-account-create-update-dkfhm" event={"ID":"0c413de1-fc94-4f5e-b697-fb6f94d99d46","Type":"ContainerDied","Data":"e7a64934f4de9c5b9893918259531267e5446e006e8ae984ed03d695c1bc3422"}
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.068893 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a8cf-account-create-update-5t4l9" event={"ID":"f059d6d8-faf3-4f28-977d-c8786a790906","Type":"ContainerDied","Data":"417536482bf9e1c611c53f816836658ce4daa827d5e3be408439215d886aad92"}
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.068912 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="417536482bf9e1c611c53f816836658ce4daa827d5e3be408439215d886aad92"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.068968 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a8cf-account-create-update-5t4l9"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.071305 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zbqjf" event={"ID":"33bf9790-4fcb-4959-b8b3-2f77741968c7","Type":"ContainerDied","Data":"f0d72709755d1d76f627b67fd5d45b3e8b88eca52cda1fa1f4dd9b8c6a096272"}
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.071327 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0d72709755d1d76f627b67fd5d45b3e8b88eca52cda1fa1f4dd9b8c6a096272"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.071385 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zbqjf"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.110825 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-smm2m" podStartSLOduration=1.682625308 podStartE2EDuration="7.110809506s" podCreationTimestamp="2026-01-20 04:05:16 +0000 UTC" firstStartedPulling="2026-01-20 04:05:17.032599452 +0000 UTC m=+963.632387311" lastFinishedPulling="2026-01-20 04:05:22.46078365 +0000 UTC m=+969.060571509" observedRunningTime="2026-01-20 04:05:23.108404329 +0000 UTC m=+969.708192188" watchObservedRunningTime="2026-01-20 04:05:23.110809506 +0000 UTC m=+969.710597365"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.135612 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-ksczh"]
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.142105 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-ksczh"]
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.336474 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z4nsj" podStartSLOduration=2.841699472 podStartE2EDuration="7.336456822s" podCreationTimestamp="2026-01-20 04:05:16 +0000 UTC" firstStartedPulling="2026-01-20 04:05:17.964827422 +0000 UTC m=+964.564615281" lastFinishedPulling="2026-01-20 04:05:22.459584772 +0000 UTC m=+969.059372631" observedRunningTime="2026-01-20 04:05:23.15616736 +0000 UTC m=+969.755955239" watchObservedRunningTime="2026-01-20 04:05:23.336456822 +0000 UTC m=+969.936244681"
Jan 20 04:05:23 crc kubenswrapper[4898]: I0120 04:05:23.761046 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30220521-8086-4376-8536-bb9cc5f4bfc5" path="/var/lib/kubelet/pods/30220521-8086-4376-8536-bb9cc5f4bfc5/volumes"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.499969 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dkf2b"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.506437 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d625-account-create-update-dkfhm"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.675416 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk6mc\" (UniqueName: \"kubernetes.io/projected/659ed26d-0996-42cf-9288-f9c6567f61a8-kube-api-access-wk6mc\") pod \"659ed26d-0996-42cf-9288-f9c6567f61a8\" (UID: \"659ed26d-0996-42cf-9288-f9c6567f61a8\") "
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.675525 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c413de1-fc94-4f5e-b697-fb6f94d99d46-operator-scripts\") pod \"0c413de1-fc94-4f5e-b697-fb6f94d99d46\" (UID: \"0c413de1-fc94-4f5e-b697-fb6f94d99d46\") "
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.675639 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qrsw\" (UniqueName: \"kubernetes.io/projected/0c413de1-fc94-4f5e-b697-fb6f94d99d46-kube-api-access-4qrsw\") pod \"0c413de1-fc94-4f5e-b697-fb6f94d99d46\" (UID: \"0c413de1-fc94-4f5e-b697-fb6f94d99d46\") "
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.675706 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/659ed26d-0996-42cf-9288-f9c6567f61a8-operator-scripts\") pod \"659ed26d-0996-42cf-9288-f9c6567f61a8\" (UID: \"659ed26d-0996-42cf-9288-f9c6567f61a8\") "
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.676011 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c413de1-fc94-4f5e-b697-fb6f94d99d46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c413de1-fc94-4f5e-b697-fb6f94d99d46" (UID: "0c413de1-fc94-4f5e-b697-fb6f94d99d46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.676155 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659ed26d-0996-42cf-9288-f9c6567f61a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "659ed26d-0996-42cf-9288-f9c6567f61a8" (UID: "659ed26d-0996-42cf-9288-f9c6567f61a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.689832 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659ed26d-0996-42cf-9288-f9c6567f61a8-kube-api-access-wk6mc" (OuterVolumeSpecName: "kube-api-access-wk6mc") pod "659ed26d-0996-42cf-9288-f9c6567f61a8" (UID: "659ed26d-0996-42cf-9288-f9c6567f61a8"). InnerVolumeSpecName "kube-api-access-wk6mc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.689936 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c413de1-fc94-4f5e-b697-fb6f94d99d46-kube-api-access-4qrsw" (OuterVolumeSpecName: "kube-api-access-4qrsw") pod "0c413de1-fc94-4f5e-b697-fb6f94d99d46" (UID: "0c413de1-fc94-4f5e-b697-fb6f94d99d46"). InnerVolumeSpecName "kube-api-access-4qrsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.777233 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qrsw\" (UniqueName: \"kubernetes.io/projected/0c413de1-fc94-4f5e-b697-fb6f94d99d46-kube-api-access-4qrsw\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.777273 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/659ed26d-0996-42cf-9288-f9c6567f61a8-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.777285 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk6mc\" (UniqueName: \"kubernetes.io/projected/659ed26d-0996-42cf-9288-f9c6567f61a8-kube-api-access-wk6mc\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.777296 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c413de1-fc94-4f5e-b697-fb6f94d99d46-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879112 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gz5lr"]
Jan 20 04:05:24 crc kubenswrapper[4898]: E0120 04:05:24.879466 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc07dbb-703a-49b5-be97-6162c5fba9e0" containerName="mariadb-account-create-update"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879485 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc07dbb-703a-49b5-be97-6162c5fba9e0" containerName="mariadb-account-create-update"
Jan 20 04:05:24 crc kubenswrapper[4898]: E0120 04:05:24.879510 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30220521-8086-4376-8536-bb9cc5f4bfc5" containerName="init"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879517 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="30220521-8086-4376-8536-bb9cc5f4bfc5" containerName="init"
Jan 20 04:05:24 crc kubenswrapper[4898]: E0120 04:05:24.879529 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f059d6d8-faf3-4f28-977d-c8786a790906" containerName="mariadb-account-create-update"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879535 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f059d6d8-faf3-4f28-977d-c8786a790906" containerName="mariadb-account-create-update"
Jan 20 04:05:24 crc kubenswrapper[4898]: E0120 04:05:24.879546 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bf9790-4fcb-4959-b8b3-2f77741968c7" containerName="mariadb-database-create"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879552 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bf9790-4fcb-4959-b8b3-2f77741968c7" containerName="mariadb-database-create"
Jan 20 04:05:24 crc kubenswrapper[4898]: E0120 04:05:24.879563 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c413de1-fc94-4f5e-b697-fb6f94d99d46" containerName="mariadb-account-create-update"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879569 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c413de1-fc94-4f5e-b697-fb6f94d99d46" containerName="mariadb-account-create-update"
Jan 20 04:05:24 crc kubenswrapper[4898]: E0120 04:05:24.879575 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569429a7-1cab-4f29-9a5f-5430c3364d56" containerName="mariadb-database-create"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879582 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="569429a7-1cab-4f29-9a5f-5430c3364d56" containerName="mariadb-database-create"
Jan 20 04:05:24 crc kubenswrapper[4898]: E0120 04:05:24.879596 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30220521-8086-4376-8536-bb9cc5f4bfc5" containerName="dnsmasq-dns"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879601 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="30220521-8086-4376-8536-bb9cc5f4bfc5" containerName="dnsmasq-dns"
Jan 20 04:05:24 crc kubenswrapper[4898]: E0120 04:05:24.879621 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659ed26d-0996-42cf-9288-f9c6567f61a8" containerName="mariadb-database-create"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879627 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="659ed26d-0996-42cf-9288-f9c6567f61a8" containerName="mariadb-database-create"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879765 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc07dbb-703a-49b5-be97-6162c5fba9e0" containerName="mariadb-account-create-update"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879780 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c413de1-fc94-4f5e-b697-fb6f94d99d46" containerName="mariadb-account-create-update"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879789 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="30220521-8086-4376-8536-bb9cc5f4bfc5" containerName="dnsmasq-dns"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879800 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="659ed26d-0996-42cf-9288-f9c6567f61a8" containerName="mariadb-database-create"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879811 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bf9790-4fcb-4959-b8b3-2f77741968c7" containerName="mariadb-database-create"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879822 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="569429a7-1cab-4f29-9a5f-5430c3364d56" containerName="mariadb-database-create"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.879830 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f059d6d8-faf3-4f28-977d-c8786a790906" containerName="mariadb-account-create-update"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.880933 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz5lr"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.905065 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz5lr"]
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.980504 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-catalog-content\") pod \"redhat-marketplace-gz5lr\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " pod="openshift-marketplace/redhat-marketplace-gz5lr"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.980582 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8f49\" (UniqueName: \"kubernetes.io/projected/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-kube-api-access-l8f49\") pod \"redhat-marketplace-gz5lr\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " pod="openshift-marketplace/redhat-marketplace-gz5lr"
Jan 20 04:05:24 crc kubenswrapper[4898]: I0120 04:05:24.980662 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-utilities\") pod \"redhat-marketplace-gz5lr\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " pod="openshift-marketplace/redhat-marketplace-gz5lr"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.082133 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-catalog-content\") pod \"redhat-marketplace-gz5lr\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " pod="openshift-marketplace/redhat-marketplace-gz5lr"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.082184 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8f49\" (UniqueName: \"kubernetes.io/projected/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-kube-api-access-l8f49\") pod \"redhat-marketplace-gz5lr\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " pod="openshift-marketplace/redhat-marketplace-gz5lr"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.082244 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-utilities\") pod \"redhat-marketplace-gz5lr\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " pod="openshift-marketplace/redhat-marketplace-gz5lr"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.082933 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-utilities\") pod \"redhat-marketplace-gz5lr\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " pod="openshift-marketplace/redhat-marketplace-gz5lr"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.082951 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-catalog-content\") pod \"redhat-marketplace-gz5lr\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " pod="openshift-marketplace/redhat-marketplace-gz5lr"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.088099 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d625-account-create-update-dkfhm"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.088095 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d625-account-create-update-dkfhm" event={"ID":"0c413de1-fc94-4f5e-b697-fb6f94d99d46","Type":"ContainerDied","Data":"fa8777110eb3c93ff0b86a7437dfeb548778b25e4abd4328d5dde40ceecb952d"}
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.088152 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa8777110eb3c93ff0b86a7437dfeb548778b25e4abd4328d5dde40ceecb952d"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.093096 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dkf2b" event={"ID":"659ed26d-0996-42cf-9288-f9c6567f61a8","Type":"ContainerDied","Data":"6453993fbbd67c7f0dcaff8c619ae7ac1e114abf8d41e71460482f8572c93748"}
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.093144 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6453993fbbd67c7f0dcaff8c619ae7ac1e114abf8d41e71460482f8572c93748"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.093172 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dkf2b"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.111364 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8f49\" (UniqueName: \"kubernetes.io/projected/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-kube-api-access-l8f49\") pod \"redhat-marketplace-gz5lr\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " pod="openshift-marketplace/redhat-marketplace-gz5lr"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.204346 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz5lr"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.376266 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lhnsj"]
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.377928 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lhnsj"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.383590 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.400545 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lhnsj"]
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.489323 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c30ea4d9-db88-4a3b-8cb7-072679021c72-operator-scripts\") pod \"root-account-create-update-lhnsj\" (UID: \"c30ea4d9-db88-4a3b-8cb7-072679021c72\") " pod="openstack/root-account-create-update-lhnsj"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.489677 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtkwp\" (UniqueName: \"kubernetes.io/projected/c30ea4d9-db88-4a3b-8cb7-072679021c72-kube-api-access-wtkwp\") pod \"root-account-create-update-lhnsj\" (UID: \"c30ea4d9-db88-4a3b-8cb7-072679021c72\") " pod="openstack/root-account-create-update-lhnsj"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.590938 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c30ea4d9-db88-4a3b-8cb7-072679021c72-operator-scripts\") pod \"root-account-create-update-lhnsj\" (UID: \"c30ea4d9-db88-4a3b-8cb7-072679021c72\") " pod="openstack/root-account-create-update-lhnsj"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.591097 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtkwp\" (UniqueName: \"kubernetes.io/projected/c30ea4d9-db88-4a3b-8cb7-072679021c72-kube-api-access-wtkwp\") pod \"root-account-create-update-lhnsj\" (UID: \"c30ea4d9-db88-4a3b-8cb7-072679021c72\") " pod="openstack/root-account-create-update-lhnsj"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.591708 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c30ea4d9-db88-4a3b-8cb7-072679021c72-operator-scripts\") pod \"root-account-create-update-lhnsj\" (UID: \"c30ea4d9-db88-4a3b-8cb7-072679021c72\") " pod="openstack/root-account-create-update-lhnsj"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.608696 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtkwp\" (UniqueName: \"kubernetes.io/projected/c30ea4d9-db88-4a3b-8cb7-072679021c72-kube-api-access-wtkwp\") pod \"root-account-create-update-lhnsj\" (UID: \"c30ea4d9-db88-4a3b-8cb7-072679021c72\") " pod="openstack/root-account-create-update-lhnsj"
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.699993 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz5lr"]
Jan 20 04:05:25 crc kubenswrapper[4898]: W0120 04:05:25.712401 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7121b2e8_f0f3_48e3_9db5_8ad947f628c6.slice/crio-48c4a7eeb9b3827170062c1b02f0cbb72ff42b95f79b1c6bde1c4e731f769489 WatchSource:0}: Error finding container 48c4a7eeb9b3827170062c1b02f0cbb72ff42b95f79b1c6bde1c4e731f769489: Status 404 returned error can't find the container with id 48c4a7eeb9b3827170062c1b02f0cbb72ff42b95f79b1c6bde1c4e731f769489
Jan 20 04:05:25 crc kubenswrapper[4898]: I0120 04:05:25.756093 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lhnsj"
Jan 20 04:05:26 crc kubenswrapper[4898]: I0120 04:05:26.101828 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz5lr" event={"ID":"7121b2e8-f0f3-48e3-9db5-8ad947f628c6","Type":"ContainerStarted","Data":"48c4a7eeb9b3827170062c1b02f0cbb72ff42b95f79b1c6bde1c4e731f769489"}
Jan 20 04:05:26 crc kubenswrapper[4898]: I0120 04:05:26.329268 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lhnsj"]
Jan 20 04:05:26 crc kubenswrapper[4898]: W0120 04:05:26.352213 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc30ea4d9_db88_4a3b_8cb7_072679021c72.slice/crio-96c1e37b54c2493a5a824d2d2a851d010b3be43055c25aaf97c34ed990151ebd WatchSource:0}: Error finding container 96c1e37b54c2493a5a824d2d2a851d010b3be43055c25aaf97c34ed990151ebd: Status 404 returned error can't find the container with id 96c1e37b54c2493a5a824d2d2a851d010b3be43055c25aaf97c34ed990151ebd
Jan 20 04:05:26 crc kubenswrapper[4898]: I0120 04:05:26.720599 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z4nsj"
Jan 20 04:05:26 crc kubenswrapper[4898]: I0120 04:05:26.720711 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z4nsj"
Jan 20 04:05:27 crc kubenswrapper[4898]: I0120 04:05:27.111509 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lhnsj" event={"ID":"c30ea4d9-db88-4a3b-8cb7-072679021c72","Type":"ContainerStarted","Data":"96c1e37b54c2493a5a824d2d2a851d010b3be43055c25aaf97c34ed990151ebd"}
Jan 20 04:05:27 crc kubenswrapper[4898]: I0120 04:05:27.113323 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz5lr" event={"ID":"7121b2e8-f0f3-48e3-9db5-8ad947f628c6","Type":"ContainerStarted","Data":"ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e"}
Jan 20 04:05:27 crc kubenswrapper[4898]: I0120 04:05:27.753411 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0"
Jan 20 04:05:27 crc kubenswrapper[4898]: E0120 04:05:27.755050 4898 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 20 04:05:27 crc kubenswrapper[4898]: E0120 04:05:27.755071 4898 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 20 04:05:27 crc kubenswrapper[4898]: E0120 04:05:27.755103 4898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift podName:311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b nodeName:}" failed. No retries permitted until 2026-01-20 04:05:43.755089517 +0000 UTC m=+990.354877376 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift") pod "swift-storage-0" (UID: "311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b") : configmap "swift-ring-files" not found
Jan 20 04:05:27 crc kubenswrapper[4898]: I0120 04:05:27.762574 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z4nsj" podUID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerName="registry-server" probeResult="failure" output=<
Jan 20 04:05:27 crc kubenswrapper[4898]: timeout: failed to connect service ":50051" within 1s
Jan 20 04:05:27 crc kubenswrapper[4898]: >
Jan 20 04:05:28 crc kubenswrapper[4898]: I0120 04:05:28.121422 4898 generic.go:334] "Generic (PLEG): container finished" podID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" containerID="ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e" exitCode=0
Jan 20 04:05:28 crc kubenswrapper[4898]: I0120 04:05:28.121655 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz5lr" event={"ID":"7121b2e8-f0f3-48e3-9db5-8ad947f628c6","Type":"ContainerDied","Data":"ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e"}
Jan 20 04:05:28 crc kubenswrapper[4898]: I0120 04:05:28.129973 4898 generic.go:334] "Generic (PLEG): container finished" podID="c30ea4d9-db88-4a3b-8cb7-072679021c72" containerID="0e311235e68a95952ed865bf20148726b7b234604f71ba9b824e7a3f6e1a5588" exitCode=0
Jan 20 04:05:28 crc kubenswrapper[4898]: I0120 04:05:28.130024 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lhnsj" event={"ID":"c30ea4d9-db88-4a3b-8cb7-072679021c72","Type":"ContainerDied","Data":"0e311235e68a95952ed865bf20148726b7b234604f71ba9b824e7a3f6e1a5588"}
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.026701 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-srv84"]
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.031300 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.034804 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-srv84"]
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.036292 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-47npl"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.038092 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.082183 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-config-data\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.082274 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8phm\" (UniqueName: \"kubernetes.io/projected/befd057f-4068-49ce-8679-33b0b01fabfc-kube-api-access-w8phm\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.082343 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-combined-ca-bundle\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.082369 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-db-sync-config-data\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.183107 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8phm\" (UniqueName: \"kubernetes.io/projected/befd057f-4068-49ce-8679-33b0b01fabfc-kube-api-access-w8phm\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.183198 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-combined-ca-bundle\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.183230 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-db-sync-config-data\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.183259 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-config-data\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.193098 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-config-data\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.193119 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-combined-ca-bundle\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.204838 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-db-sync-config-data\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.207242 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8phm\" (UniqueName: \"kubernetes.io/projected/befd057f-4068-49ce-8679-33b0b01fabfc-kube-api-access-w8phm\") pod \"glance-db-sync-srv84\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.346912 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-srv84"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.480099 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lhnsj"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.595973 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c30ea4d9-db88-4a3b-8cb7-072679021c72-operator-scripts\") pod \"c30ea4d9-db88-4a3b-8cb7-072679021c72\" (UID: \"c30ea4d9-db88-4a3b-8cb7-072679021c72\") "
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.596323 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtkwp\" (UniqueName: \"kubernetes.io/projected/c30ea4d9-db88-4a3b-8cb7-072679021c72-kube-api-access-wtkwp\") pod \"c30ea4d9-db88-4a3b-8cb7-072679021c72\" (UID: \"c30ea4d9-db88-4a3b-8cb7-072679021c72\") "
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.600318 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c30ea4d9-db88-4a3b-8cb7-072679021c72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c30ea4d9-db88-4a3b-8cb7-072679021c72" (UID: "c30ea4d9-db88-4a3b-8cb7-072679021c72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.611622 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c30ea4d9-db88-4a3b-8cb7-072679021c72-kube-api-access-wtkwp" (OuterVolumeSpecName: "kube-api-access-wtkwp") pod "c30ea4d9-db88-4a3b-8cb7-072679021c72" (UID: "c30ea4d9-db88-4a3b-8cb7-072679021c72"). InnerVolumeSpecName "kube-api-access-wtkwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.687849 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.701852 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c30ea4d9-db88-4a3b-8cb7-072679021c72-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:29 crc kubenswrapper[4898]: I0120 04:05:29.701889 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtkwp\" (UniqueName: \"kubernetes.io/projected/c30ea4d9-db88-4a3b-8cb7-072679021c72-kube-api-access-wtkwp\") on node \"crc\" DevicePath \"\""
Jan 20 04:05:30 crc kubenswrapper[4898]: I0120 04:05:30.035671 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-srv84"]
Jan 20 04:05:30 crc kubenswrapper[4898]: I0120 04:05:30.147356 4898 generic.go:334] "Generic (PLEG): container finished" podID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" containerID="e96b66fae2b118f263a00512eb3d068428666582aa404fe215a8cfa02f2b536b" exitCode=0
Jan 20 04:05:30 crc kubenswrapper[4898]: I0120 04:05:30.147476 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz5lr" event={"ID":"7121b2e8-f0f3-48e3-9db5-8ad947f628c6","Type":"ContainerDied","Data":"e96b66fae2b118f263a00512eb3d068428666582aa404fe215a8cfa02f2b536b"}
Jan 20 04:05:30 crc kubenswrapper[4898]: I0120 04:05:30.153229 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-srv84" event={"ID":"befd057f-4068-49ce-8679-33b0b01fabfc","Type":"ContainerStarted","Data":"a70c6d301d177dee2dc9998f37b1261642f078a0ebbb37c0cc4ff313c8f5d470"}
Jan 20 04:05:30 crc kubenswrapper[4898]: I0120 04:05:30.157942 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lhnsj" event={"ID":"c30ea4d9-db88-4a3b-8cb7-072679021c72","Type":"ContainerDied","Data":"96c1e37b54c2493a5a824d2d2a851d010b3be43055c25aaf97c34ed990151ebd"}
Jan 20 04:05:30 crc kubenswrapper[4898]: I0120 04:05:30.157984 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96c1e37b54c2493a5a824d2d2a851d010b3be43055c25aaf97c34ed990151ebd"
Jan 20 04:05:30 crc kubenswrapper[4898]: I0120 04:05:30.158036 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lhnsj"
Jan 20 04:05:31 crc kubenswrapper[4898]: I0120 04:05:31.870097 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lhnsj"]
Jan 20 04:05:31 crc kubenswrapper[4898]: I0120 04:05:31.877992 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lhnsj"]
Jan 20 04:05:33 crc kubenswrapper[4898]: I0120 04:05:33.185984 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz5lr" event={"ID":"7121b2e8-f0f3-48e3-9db5-8ad947f628c6","Type":"ContainerStarted","Data":"c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7"}
Jan 20 04:05:33 crc kubenswrapper[4898]: I0120 04:05:33.189166 4898 generic.go:334] "Generic (PLEG): container finished" podID="77d37150-8bcb-46ff-9b40-aa959b7993d2" containerID="fdaa52cdea769e4e8a9c1916bc4363f6bd0d6c3f4ca621cf2baec0c2a572b29e" exitCode=0
Jan 20 04:05:33 crc kubenswrapper[4898]: I0120 04:05:33.189208 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-smm2m" event={"ID":"77d37150-8bcb-46ff-9b40-aa959b7993d2","Type":"ContainerDied","Data":"fdaa52cdea769e4e8a9c1916bc4363f6bd0d6c3f4ca621cf2baec0c2a572b29e"}
Jan 20 04:05:33 crc kubenswrapper[4898]: I0120 04:05:33.230275 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gz5lr" podStartSLOduration=6.527450749 podStartE2EDuration="9.230251556s" podCreationTimestamp="2026-01-20 04:05:24 +0000 UTC" firstStartedPulling="2026-01-20 04:05:28.125773694 +0000 UTC m=+974.725561563" lastFinishedPulling="2026-01-20 04:05:30.828574511 +0000 UTC m=+977.428362370" observedRunningTime="2026-01-20 04:05:33.21160479 +0000 UTC m=+979.811392669" watchObservedRunningTime="2026-01-20 04:05:33.230251556 +0000 UTC m=+979.830039415"
Jan 20 04:05:33 crc kubenswrapper[4898]: I0120 04:05:33.736238 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c30ea4d9-db88-4a3b-8cb7-072679021c72" path="/var/lib/kubelet/pods/c30ea4d9-db88-4a3b-8cb7-072679021c72/volumes"
Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.198540 4898 generic.go:334] "Generic (PLEG): container finished" podID="1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48" containerID="80d5aaa819f1e01ee5c665789ec2ed83a4bffebdc2e9bb911204502b2bff4260" exitCode=0
Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.198629 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48","Type":"ContainerDied","Data":"80d5aaa819f1e01ee5c665789ec2ed83a4bffebdc2e9bb911204502b2bff4260"}
Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.610496 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.701072 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-ring-data-devices\") pod \"77d37150-8bcb-46ff-9b40-aa959b7993d2\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.701127 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-dispersionconf\") pod \"77d37150-8bcb-46ff-9b40-aa959b7993d2\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.701157 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-scripts\") pod \"77d37150-8bcb-46ff-9b40-aa959b7993d2\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.701184 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/77d37150-8bcb-46ff-9b40-aa959b7993d2-etc-swift\") pod \"77d37150-8bcb-46ff-9b40-aa959b7993d2\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.701219 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-combined-ca-bundle\") pod \"77d37150-8bcb-46ff-9b40-aa959b7993d2\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.701277 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5zjs\" (UniqueName: \"kubernetes.io/projected/77d37150-8bcb-46ff-9b40-aa959b7993d2-kube-api-access-r5zjs\") pod \"77d37150-8bcb-46ff-9b40-aa959b7993d2\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.701335 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-swiftconf\") pod \"77d37150-8bcb-46ff-9b40-aa959b7993d2\" (UID: \"77d37150-8bcb-46ff-9b40-aa959b7993d2\") " Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.701940 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "77d37150-8bcb-46ff-9b40-aa959b7993d2" (UID: "77d37150-8bcb-46ff-9b40-aa959b7993d2"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.702283 4898 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.702946 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77d37150-8bcb-46ff-9b40-aa959b7993d2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "77d37150-8bcb-46ff-9b40-aa959b7993d2" (UID: "77d37150-8bcb-46ff-9b40-aa959b7993d2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.724595 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-scripts" (OuterVolumeSpecName: "scripts") pod "77d37150-8bcb-46ff-9b40-aa959b7993d2" (UID: "77d37150-8bcb-46ff-9b40-aa959b7993d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.726092 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77d37150-8bcb-46ff-9b40-aa959b7993d2" (UID: "77d37150-8bcb-46ff-9b40-aa959b7993d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.726786 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "77d37150-8bcb-46ff-9b40-aa959b7993d2" (UID: "77d37150-8bcb-46ff-9b40-aa959b7993d2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.726915 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77d37150-8bcb-46ff-9b40-aa959b7993d2-kube-api-access-r5zjs" (OuterVolumeSpecName: "kube-api-access-r5zjs") pod "77d37150-8bcb-46ff-9b40-aa959b7993d2" (UID: "77d37150-8bcb-46ff-9b40-aa959b7993d2"). InnerVolumeSpecName "kube-api-access-r5zjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.727934 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "77d37150-8bcb-46ff-9b40-aa959b7993d2" (UID: "77d37150-8bcb-46ff-9b40-aa959b7993d2"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.803683 4898 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.803713 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77d37150-8bcb-46ff-9b40-aa959b7993d2-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.803722 4898 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/77d37150-8bcb-46ff-9b40-aa959b7993d2-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.803731 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.803740 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5zjs\" (UniqueName: \"kubernetes.io/projected/77d37150-8bcb-46ff-9b40-aa959b7993d2-kube-api-access-r5zjs\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:34 crc kubenswrapper[4898]: I0120 04:05:34.803750 4898 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/77d37150-8bcb-46ff-9b40-aa959b7993d2-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.204677 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gz5lr" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.205007 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gz5lr" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.212334 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48","Type":"ContainerStarted","Data":"1a354b4ffdd02903ed1455601d351eb190795210fcbbec612706679cfc369ca4"} Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.212840 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.216562 4898 generic.go:334] "Generic (PLEG): container finished" podID="a1f422f4-afd1-4794-85b1-cb82712e004a" containerID="d0e1a4748f118d191c7c7524e20f05dda6ac89b1268758bbde0dd600550730db" exitCode=0 Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.216659 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1f422f4-afd1-4794-85b1-cb82712e004a","Type":"ContainerDied","Data":"d0e1a4748f118d191c7c7524e20f05dda6ac89b1268758bbde0dd600550730db"} Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.218506 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-smm2m" event={"ID":"77d37150-8bcb-46ff-9b40-aa959b7993d2","Type":"ContainerDied","Data":"be1cffb71562c2f6c473f1cf3a45b6bd59d8be051813b2e12fcc66e1edc29117"} Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.218541 4898 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="be1cffb71562c2f6c473f1cf3a45b6bd59d8be051813b2e12fcc66e1edc29117" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.218517 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-smm2m" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.250336 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.560854941 podStartE2EDuration="1m2.250318889s" podCreationTimestamp="2026-01-20 04:04:33 +0000 UTC" firstStartedPulling="2026-01-20 04:04:51.274279651 +0000 UTC m=+937.874067520" lastFinishedPulling="2026-01-20 04:04:58.963743609 +0000 UTC m=+945.563531468" observedRunningTime="2026-01-20 04:05:35.248227493 +0000 UTC m=+981.848015352" watchObservedRunningTime="2026-01-20 04:05:35.250318889 +0000 UTC m=+981.850106758" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.277646 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gz5lr" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.432847 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5rcwc"] Jan 20 04:05:35 crc kubenswrapper[4898]: E0120 04:05:35.433193 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c30ea4d9-db88-4a3b-8cb7-072679021c72" containerName="mariadb-account-create-update" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.433207 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c30ea4d9-db88-4a3b-8cb7-072679021c72" containerName="mariadb-account-create-update" Jan 20 04:05:35 crc kubenswrapper[4898]: E0120 04:05:35.433240 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77d37150-8bcb-46ff-9b40-aa959b7993d2" containerName="swift-ring-rebalance" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.433247 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d37150-8bcb-46ff-9b40-aa959b7993d2" containerName="swift-ring-rebalance" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.433399 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="77d37150-8bcb-46ff-9b40-aa959b7993d2" containerName="swift-ring-rebalance" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.433419 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c30ea4d9-db88-4a3b-8cb7-072679021c72" containerName="mariadb-account-create-update" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.434005 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5rcwc" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.437492 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.454914 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5rcwc"] Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.528801 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq6w2\" (UniqueName: \"kubernetes.io/projected/48bf9549-2dda-49d0-8cb3-c45cab8929ef-kube-api-access-lq6w2\") pod \"root-account-create-update-5rcwc\" (UID: \"48bf9549-2dda-49d0-8cb3-c45cab8929ef\") " pod="openstack/root-account-create-update-5rcwc" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.529165 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48bf9549-2dda-49d0-8cb3-c45cab8929ef-operator-scripts\") pod \"root-account-create-update-5rcwc\" (UID: \"48bf9549-2dda-49d0-8cb3-c45cab8929ef\") " pod="openstack/root-account-create-update-5rcwc" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.630206 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq6w2\" (UniqueName: \"kubernetes.io/projected/48bf9549-2dda-49d0-8cb3-c45cab8929ef-kube-api-access-lq6w2\") pod \"root-account-create-update-5rcwc\" (UID: \"48bf9549-2dda-49d0-8cb3-c45cab8929ef\") " pod="openstack/root-account-create-update-5rcwc" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.630323 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48bf9549-2dda-49d0-8cb3-c45cab8929ef-operator-scripts\") pod \"root-account-create-update-5rcwc\" (UID: \"48bf9549-2dda-49d0-8cb3-c45cab8929ef\") " pod="openstack/root-account-create-update-5rcwc" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.631026 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48bf9549-2dda-49d0-8cb3-c45cab8929ef-operator-scripts\") pod \"root-account-create-update-5rcwc\" (UID: \"48bf9549-2dda-49d0-8cb3-c45cab8929ef\") " pod="openstack/root-account-create-update-5rcwc" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.658516 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq6w2\" (UniqueName: \"kubernetes.io/projected/48bf9549-2dda-49d0-8cb3-c45cab8929ef-kube-api-access-lq6w2\") pod \"root-account-create-update-5rcwc\" (UID: \"48bf9549-2dda-49d0-8cb3-c45cab8929ef\") " pod="openstack/root-account-create-update-5rcwc" Jan 20 04:05:35 crc kubenswrapper[4898]: I0120 04:05:35.762554 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5rcwc" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.230807 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1f422f4-afd1-4794-85b1-cb82712e004a","Type":"ContainerStarted","Data":"5888732a7ca7ae0f4d12ecf98b9b9a7835f9c945e28ad7a6cc4fda2ff37f5d00"} Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.255049 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.293960361 podStartE2EDuration="1m2.255031699s" podCreationTimestamp="2026-01-20 04:04:34 +0000 UTC" firstStartedPulling="2026-01-20 04:04:52.042187384 +0000 UTC m=+938.641975243" lastFinishedPulling="2026-01-20 04:05:00.003258682 +0000 UTC m=+946.603046581" observedRunningTime="2026-01-20 04:05:36.251005872 +0000 UTC m=+982.850793731" watchObservedRunningTime="2026-01-20 04:05:36.255031699 +0000 UTC m=+982.854819548" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.280882 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ln6nh" podUID="a6903c19-3320-443c-8713-105a39a65527" containerName="ovn-controller" probeResult="failure" output=< Jan 20 04:05:36 crc kubenswrapper[4898]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 20 04:05:36 crc kubenswrapper[4898]: > Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.290812 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.301863 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9mdd5" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.523176 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ln6nh-config-p27kz"] Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.525012 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.527546 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.582109 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ln6nh-config-p27kz"] Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.647645 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-additional-scripts\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.647718 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run-ovn\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.647741 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-log-ovn\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.647903 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-scripts\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.648022 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.648098 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbcwq\" (UniqueName: \"kubernetes.io/projected/a2b58870-e14b-4394-a425-88659017b810-kube-api-access-pbcwq\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.749494 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-scripts\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.749570 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run\") pod 
\"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.749617 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbcwq\" (UniqueName: \"kubernetes.io/projected/a2b58870-e14b-4394-a425-88659017b810-kube-api-access-pbcwq\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.749702 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-additional-scripts\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.749737 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run-ovn\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.749758 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-log-ovn\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.750072 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-log-ovn\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.750767 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.750977 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run-ovn\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.751467 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-additional-scripts\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.752105 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-scripts\") pod 
\"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.780323 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.801558 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbcwq\" (UniqueName: \"kubernetes.io/projected/a2b58870-e14b-4394-a425-88659017b810-kube-api-access-pbcwq\") pod \"ovn-controller-ln6nh-config-p27kz\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.852100 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:36 crc kubenswrapper[4898]: I0120 04:05:36.860803 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:37 crc kubenswrapper[4898]: I0120 04:05:37.043738 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z4nsj"] Jan 20 04:05:38 crc kubenswrapper[4898]: I0120 04:05:38.257754 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z4nsj" podUID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerName="registry-server" containerID="cri-o://ceb7e6c6a7863e3fb877e44dc1101d6c7db1b8fe0e3d7b409b2c30a5eb89f2d0" gracePeriod=2 Jan 20 04:05:39 crc kubenswrapper[4898]: I0120 04:05:39.279140 4898 generic.go:334] "Generic (PLEG): container finished" podID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerID="ceb7e6c6a7863e3fb877e44dc1101d6c7db1b8fe0e3d7b409b2c30a5eb89f2d0" exitCode=0 Jan 20 04:05:39 crc kubenswrapper[4898]: I0120 04:05:39.279205 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4nsj" event={"ID":"911ffe57-42e1-4ea9-96e0-3109c2223da3","Type":"ContainerDied","Data":"ceb7e6c6a7863e3fb877e44dc1101d6c7db1b8fe0e3d7b409b2c30a5eb89f2d0"} Jan 20 04:05:39 crc kubenswrapper[4898]: I0120 04:05:39.976495 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:05:39 crc kubenswrapper[4898]: I0120 04:05:39.976587 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:05:41 crc kubenswrapper[4898]: I0120 04:05:41.291000 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ln6nh" podUID="a6903c19-3320-443c-8713-105a39a65527" containerName="ovn-controller" probeResult="failure" output=< Jan 20 04:05:41 crc kubenswrapper[4898]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 20 04:05:41 crc kubenswrapper[4898]: > Jan 20 04:05:43 crc kubenswrapper[4898]: I0120 04:05:43.800569 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:43 crc kubenswrapper[4898]: I0120 04:05:43.816984 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b-etc-swift\") pod \"swift-storage-0\" (UID: \"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b\") " pod="openstack/swift-storage-0" Jan 20 04:05:43 crc kubenswrapper[4898]: I0120 04:05:43.862492 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.174015 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.210543 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.318047 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gz5lr" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.330889 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tszzc\" (UniqueName: \"kubernetes.io/projected/911ffe57-42e1-4ea9-96e0-3109c2223da3-kube-api-access-tszzc\") pod \"911ffe57-42e1-4ea9-96e0-3109c2223da3\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.331053 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-catalog-content\") pod \"911ffe57-42e1-4ea9-96e0-3109c2223da3\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.331167 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-utilities\") pod \"911ffe57-42e1-4ea9-96e0-3109c2223da3\" (UID: \"911ffe57-42e1-4ea9-96e0-3109c2223da3\") " Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.332484 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-utilities" (OuterVolumeSpecName: "utilities") pod "911ffe57-42e1-4ea9-96e0-3109c2223da3" (UID: "911ffe57-42e1-4ea9-96e0-3109c2223da3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.346625 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911ffe57-42e1-4ea9-96e0-3109c2223da3-kube-api-access-tszzc" (OuterVolumeSpecName: "kube-api-access-tszzc") pod "911ffe57-42e1-4ea9-96e0-3109c2223da3" (UID: "911ffe57-42e1-4ea9-96e0-3109c2223da3"). InnerVolumeSpecName "kube-api-access-tszzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.396492 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4nsj" event={"ID":"911ffe57-42e1-4ea9-96e0-3109c2223da3","Type":"ContainerDied","Data":"b029e6a5c1455fd3eb009f114fdd492310a857fa2d2b1c0d002b07070f51d8e3"} Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.396539 4898 scope.go:117] "RemoveContainer" containerID="ceb7e6c6a7863e3fb877e44dc1101d6c7db1b8fe0e3d7b409b2c30a5eb89f2d0" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.396669 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4nsj" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.401280 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ln6nh-config-p27kz"] Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.431395 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz5lr"] Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.431985 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gz5lr" podUID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" containerName="registry-server" containerID="cri-o://c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7" gracePeriod=2 Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.432887 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.432917 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tszzc\" (UniqueName: \"kubernetes.io/projected/911ffe57-42e1-4ea9-96e0-3109c2223da3-kube-api-access-tszzc\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.480528 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5rcwc"] Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.486820 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.503720 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.519871 4898 scope.go:117] "RemoveContainer" containerID="c26420b85699b133bf687c84696b80c48f0fb888f119a7ed5e0dfa75a103edb9" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.538842 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "911ffe57-42e1-4ea9-96e0-3109c2223da3" (UID: "911ffe57-42e1-4ea9-96e0-3109c2223da3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.591952 4898 scope.go:117] "RemoveContainer" containerID="5bd46ecc51896a03ab13ab71d839f6a62ebaa90fc2304a65b2285b11f1c99eb0" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.618869 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 20 04:05:45 crc kubenswrapper[4898]: W0120 04:05:45.628407 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod311bb6c9_9ed4_4d8b_8b0f_839ecf2bfb5b.slice/crio-489ab26fe342abb5c9060315d93a6a2bf3591f8999df3ab816d4cea5d16ce81d WatchSource:0}: Error finding container 489ab26fe342abb5c9060315d93a6a2bf3591f8999df3ab816d4cea5d16ce81d: Status 404 returned error can't find the container with id 489ab26fe342abb5c9060315d93a6a2bf3591f8999df3ab816d4cea5d16ce81d Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.636303 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911ffe57-42e1-4ea9-96e0-3109c2223da3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.734738 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z4nsj"] Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.740969 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z4nsj"] Jan 20 04:05:45 crc kubenswrapper[4898]: I0120 04:05:45.968629 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz5lr" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.041493 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8f49\" (UniqueName: \"kubernetes.io/projected/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-kube-api-access-l8f49\") pod \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.041690 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-catalog-content\") pod \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.041749 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-utilities\") pod \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\" (UID: \"7121b2e8-f0f3-48e3-9db5-8ad947f628c6\") " Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.042905 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-utilities" (OuterVolumeSpecName: "utilities") pod "7121b2e8-f0f3-48e3-9db5-8ad947f628c6" (UID: "7121b2e8-f0f3-48e3-9db5-8ad947f628c6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.055999 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-kube-api-access-l8f49" (OuterVolumeSpecName: "kube-api-access-l8f49") pod "7121b2e8-f0f3-48e3-9db5-8ad947f628c6" (UID: "7121b2e8-f0f3-48e3-9db5-8ad947f628c6"). InnerVolumeSpecName "kube-api-access-l8f49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.071455 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7121b2e8-f0f3-48e3-9db5-8ad947f628c6" (UID: "7121b2e8-f0f3-48e3-9db5-8ad947f628c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.143310 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.143338 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.143349 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8f49\" (UniqueName: \"kubernetes.io/projected/7121b2e8-f0f3-48e3-9db5-8ad947f628c6-kube-api-access-l8f49\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.264536 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ln6nh" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.406491 4898 generic.go:334] "Generic (PLEG): container finished" podID="48bf9549-2dda-49d0-8cb3-c45cab8929ef" containerID="f033c4886f2eb053792196ca3c7a5123289faceb757765b43c450c6cae456f03" exitCode=0 Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.406581 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5rcwc" event={"ID":"48bf9549-2dda-49d0-8cb3-c45cab8929ef","Type":"ContainerDied","Data":"f033c4886f2eb053792196ca3c7a5123289faceb757765b43c450c6cae456f03"} Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.406618 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5rcwc" event={"ID":"48bf9549-2dda-49d0-8cb3-c45cab8929ef","Type":"ContainerStarted","Data":"c94cece3cbdfc6ce13a14a870d569d28c5e2152fcdf74a9133de9dcec8bc9d23"} Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.409813 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-srv84" event={"ID":"befd057f-4068-49ce-8679-33b0b01fabfc","Type":"ContainerStarted","Data":"bbfac92806eb1c0ab61485d2fd6336ebf0bc06204df1d5770a601a828e23b6cf"} Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.414424 4898 generic.go:334] "Generic (PLEG): container finished" podID="a2b58870-e14b-4394-a425-88659017b810" containerID="a831c779bc2247efb5547b5672f144a3ca5bf94b1ecc5b282c7b86a59f27148b" exitCode=0 Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.414503 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ln6nh-config-p27kz" event={"ID":"a2b58870-e14b-4394-a425-88659017b810","Type":"ContainerDied","Data":"a831c779bc2247efb5547b5672f144a3ca5bf94b1ecc5b282c7b86a59f27148b"} Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.414533 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ln6nh-config-p27kz" event={"ID":"a2b58870-e14b-4394-a425-88659017b810","Type":"ContainerStarted","Data":"7c23db44d268ede76a9cf7ce14183f06a366807fb729ad2d61ee2371e313dbbc"} Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.415944 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"489ab26fe342abb5c9060315d93a6a2bf3591f8999df3ab816d4cea5d16ce81d"} Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.420982 4898 generic.go:334] "Generic (PLEG): container finished" podID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" containerID="c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7" exitCode=0 Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.421068 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz5lr" event={"ID":"7121b2e8-f0f3-48e3-9db5-8ad947f628c6","Type":"ContainerDied","Data":"c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7"} Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.421102 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz5lr" event={"ID":"7121b2e8-f0f3-48e3-9db5-8ad947f628c6","Type":"ContainerDied","Data":"48c4a7eeb9b3827170062c1b02f0cbb72ff42b95f79b1c6bde1c4e731f769489"} Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.421119 4898 scope.go:117] "RemoveContainer" containerID="c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.421218 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz5lr" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.447185 4898 scope.go:117] "RemoveContainer" containerID="e96b66fae2b118f263a00512eb3d068428666582aa404fe215a8cfa02f2b536b" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.469809 4898 scope.go:117] "RemoveContainer" containerID="ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.477029 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-srv84" podStartSLOduration=2.629345156 podStartE2EDuration="17.47701239s" podCreationTimestamp="2026-01-20 04:05:29 +0000 UTC" firstStartedPulling="2026-01-20 04:05:30.039467833 +0000 UTC m=+976.639255692" lastFinishedPulling="2026-01-20 04:05:44.887135057 +0000 UTC m=+991.486922926" observedRunningTime="2026-01-20 04:05:46.458312123 +0000 UTC m=+993.058099982" watchObservedRunningTime="2026-01-20 04:05:46.47701239 +0000 UTC m=+993.076800249" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.498036 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz5lr"] Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.504305 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz5lr"] Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.506418 4898 scope.go:117] "RemoveContainer" containerID="c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7" Jan 20 04:05:46 crc kubenswrapper[4898]: E0120 04:05:46.506778 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7\": container with ID starting with c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7 not found: ID does not exist" containerID="c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.506821 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7"} err="failed to get container status \"c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7\": rpc error: code = NotFound desc = could not find container \"c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7\": container with ID starting with c143e7470e367660b074e4b010b14200df5f99f5e618f401ed992e6d2f411ec7 not found: ID does not exist" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.507033 4898 scope.go:117] "RemoveContainer" containerID="e96b66fae2b118f263a00512eb3d068428666582aa404fe215a8cfa02f2b536b" Jan 20 04:05:46 crc kubenswrapper[4898]: E0120 04:05:46.507359 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96b66fae2b118f263a00512eb3d068428666582aa404fe215a8cfa02f2b536b\": container with ID starting with e96b66fae2b118f263a00512eb3d068428666582aa404fe215a8cfa02f2b536b not found: ID does not exist" containerID="e96b66fae2b118f263a00512eb3d068428666582aa404fe215a8cfa02f2b536b" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.507382 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96b66fae2b118f263a00512eb3d068428666582aa404fe215a8cfa02f2b536b"} err="failed to get container status 
\"e96b66fae2b118f263a00512eb3d068428666582aa404fe215a8cfa02f2b536b\": rpc error: code = NotFound desc = could not find container \"e96b66fae2b118f263a00512eb3d068428666582aa404fe215a8cfa02f2b536b\": container with ID starting with e96b66fae2b118f263a00512eb3d068428666582aa404fe215a8cfa02f2b536b not found: ID does not exist" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.507395 4898 scope.go:117] "RemoveContainer" containerID="ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e" Jan 20 04:05:46 crc kubenswrapper[4898]: E0120 04:05:46.507856 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e\": container with ID starting with ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e not found: ID does not exist" containerID="ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e" Jan 20 04:05:46 crc kubenswrapper[4898]: I0120 04:05:46.507895 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e"} err="failed to get container status \"ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e\": rpc error: code = NotFound desc = could not find container \"ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e\": container with ID starting with ec20937792ff9631824bbe3dd931966a13dd950fe1b8dfb831b811b23b35753e not found: ID does not exist" Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.430188 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"b1cf5c13acb7b9b6efd3ce734124ae928392f75f02a52cd4a86f262c212dc72c"} Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.430723 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"10f2de41015d568c9220640adc7c5c4f958d6142af55e202305c923f08c8f547"} Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.739081 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" path="/var/lib/kubelet/pods/7121b2e8-f0f3-48e3-9db5-8ad947f628c6/volumes" Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.740698 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911ffe57-42e1-4ea9-96e0-3109c2223da3" path="/var/lib/kubelet/pods/911ffe57-42e1-4ea9-96e0-3109c2223da3/volumes" Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.811635 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5rcwc" Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.852702 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.896911 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq6w2\" (UniqueName: \"kubernetes.io/projected/48bf9549-2dda-49d0-8cb3-c45cab8929ef-kube-api-access-lq6w2\") pod \"48bf9549-2dda-49d0-8cb3-c45cab8929ef\" (UID: \"48bf9549-2dda-49d0-8cb3-c45cab8929ef\") " Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.897029 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48bf9549-2dda-49d0-8cb3-c45cab8929ef-operator-scripts\") pod \"48bf9549-2dda-49d0-8cb3-c45cab8929ef\" (UID: \"48bf9549-2dda-49d0-8cb3-c45cab8929ef\") " Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.898375 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48bf9549-2dda-49d0-8cb3-c45cab8929ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48bf9549-2dda-49d0-8cb3-c45cab8929ef" (UID: "48bf9549-2dda-49d0-8cb3-c45cab8929ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.904156 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48bf9549-2dda-49d0-8cb3-c45cab8929ef-kube-api-access-lq6w2" (OuterVolumeSpecName: "kube-api-access-lq6w2") pod "48bf9549-2dda-49d0-8cb3-c45cab8929ef" (UID: "48bf9549-2dda-49d0-8cb3-c45cab8929ef"). InnerVolumeSpecName "kube-api-access-lq6w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.998333 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-additional-scripts\") pod \"a2b58870-e14b-4394-a425-88659017b810\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.998383 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-log-ovn\") pod \"a2b58870-e14b-4394-a425-88659017b810\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.998464 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run\") pod \"a2b58870-e14b-4394-a425-88659017b810\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.998483 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run-ovn\") pod \"a2b58870-e14b-4394-a425-88659017b810\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.998541 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbcwq\" (UniqueName: \"kubernetes.io/projected/a2b58870-e14b-4394-a425-88659017b810-kube-api-access-pbcwq\") pod \"a2b58870-e14b-4394-a425-88659017b810\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.998570 4898 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-scripts\") pod \"a2b58870-e14b-4394-a425-88659017b810\" (UID: \"a2b58870-e14b-4394-a425-88659017b810\") " Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.998894 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48bf9549-2dda-49d0-8cb3-c45cab8929ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:47 crc kubenswrapper[4898]: I0120 04:05:47.998912 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq6w2\" (UniqueName: \"kubernetes.io/projected/48bf9549-2dda-49d0-8cb3-c45cab8929ef-kube-api-access-lq6w2\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:47.999746 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a2b58870-e14b-4394-a425-88659017b810" (UID: "a2b58870-e14b-4394-a425-88659017b810"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:47.999790 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a2b58870-e14b-4394-a425-88659017b810" (UID: "a2b58870-e14b-4394-a425-88659017b810"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:47.999813 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a2b58870-e14b-4394-a425-88659017b810" (UID: "a2b58870-e14b-4394-a425-88659017b810"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:47.999798 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run" (OuterVolumeSpecName: "var-run") pod "a2b58870-e14b-4394-a425-88659017b810" (UID: "a2b58870-e14b-4394-a425-88659017b810"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:47.999991 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-scripts" (OuterVolumeSpecName: "scripts") pod "a2b58870-e14b-4394-a425-88659017b810" (UID: "a2b58870-e14b-4394-a425-88659017b810"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.003392 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b58870-e14b-4394-a425-88659017b810-kube-api-access-pbcwq" (OuterVolumeSpecName: "kube-api-access-pbcwq") pod "a2b58870-e14b-4394-a425-88659017b810" (UID: "a2b58870-e14b-4394-a425-88659017b810"). InnerVolumeSpecName "kube-api-access-pbcwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.101372 4898 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.101407 4898 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.101417 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbcwq\" (UniqueName: \"kubernetes.io/projected/a2b58870-e14b-4394-a425-88659017b810-kube-api-access-pbcwq\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.101427 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.101446 4898 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b58870-e14b-4394-a425-88659017b810-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.101454 4898 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a2b58870-e14b-4394-a425-88659017b810-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.442086 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ln6nh-config-p27kz" event={"ID":"a2b58870-e14b-4394-a425-88659017b810","Type":"ContainerDied","Data":"7c23db44d268ede76a9cf7ce14183f06a366807fb729ad2d61ee2371e313dbbc"} Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.442467 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c23db44d268ede76a9cf7ce14183f06a366807fb729ad2d61ee2371e313dbbc" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.442529 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ln6nh-config-p27kz" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.455127 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"ef20beef40694e5055f271477dbe2c05054eae248d956c837adad3a515053dcb"} Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.455190 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"0549cff3f870e935534ce85fe5426e7b7f2c9b50645e4de3979d398d782a8cfb"} Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.457191 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5rcwc" event={"ID":"48bf9549-2dda-49d0-8cb3-c45cab8929ef","Type":"ContainerDied","Data":"c94cece3cbdfc6ce13a14a870d569d28c5e2152fcdf74a9133de9dcec8bc9d23"} Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.457230 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c94cece3cbdfc6ce13a14a870d569d28c5e2152fcdf74a9133de9dcec8bc9d23" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.457256 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5rcwc" Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.948501 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ln6nh-config-p27kz"] Jan 20 04:05:48 crc kubenswrapper[4898]: I0120 04:05:48.952827 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ln6nh-config-p27kz"] Jan 20 04:05:49 crc kubenswrapper[4898]: I0120 04:05:49.465850 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"d2bb8ced5788ffc157680f1fad67b1689f9d33a1b6e68c09b5ce0bd0e357e0f1"} Jan 20 04:05:49 crc kubenswrapper[4898]: I0120 04:05:49.730282 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b58870-e14b-4394-a425-88659017b810" path="/var/lib/kubelet/pods/a2b58870-e14b-4394-a425-88659017b810/volumes" Jan 20 04:05:50 crc kubenswrapper[4898]: I0120 04:05:50.478682 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"adc5b2c6b0df1598fbb00a1f49565e8043021e7329a1b171b07442d3f7528a72"} Jan 20 04:05:50 crc kubenswrapper[4898]: I0120 04:05:50.478986 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"b71a3e94f4f5bae20be1fabda8839e5cfad5fcb93e06c44450de9b3745308bb1"} Jan 20 04:05:50 crc kubenswrapper[4898]: I0120 04:05:50.479000 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"d49b6fbca9fd6b59ac2a3cca3343dd788796442d2576876da994b65bea420224"} Jan 20 04:05:51 crc kubenswrapper[4898]: I0120 04:05:51.498426 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"4c5a34f62e035dccb9a6afe6747c5402c8d7cdae68fe0e468cafec825655117a"} Jan 20 04:05:51 crc kubenswrapper[4898]: I0120 
04:05:51.499047 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"3d3f49827d061847ac9697465da4ffcbfacc554b318835017e9528b094609eb7"} Jan 20 04:05:51 crc kubenswrapper[4898]: I0120 04:05:51.901164 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5rcwc"] Jan 20 04:05:51 crc kubenswrapper[4898]: I0120 04:05:51.906404 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5rcwc"] Jan 20 04:05:52 crc kubenswrapper[4898]: I0120 04:05:52.514640 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"3c65fde44b5a6f7664ae5662edba9902caa2bd64c10f36ebfa1602ec2b43d008"} Jan 20 04:05:52 crc kubenswrapper[4898]: I0120 04:05:52.514951 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"542e053ba941d5306d55c22d162bbf683b5326720c48a20cd032f6bb22a7007e"} Jan 20 04:05:52 crc kubenswrapper[4898]: I0120 04:05:52.514964 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"abc64c3a82bd0363de0e31b7ba82ccd3a5a0eec73fd256fdca99efa6f44810d0"} Jan 20 04:05:52 crc kubenswrapper[4898]: I0120 04:05:52.514972 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"e6d406b13e901d72524199d8479f1fee79e7faf8485c0c8a2bdf9949cfeb477d"} Jan 20 04:05:52 crc kubenswrapper[4898]: I0120 04:05:52.516956 4898 generic.go:334] "Generic (PLEG): container finished" podID="befd057f-4068-49ce-8679-33b0b01fabfc" containerID="bbfac92806eb1c0ab61485d2fd6336ebf0bc06204df1d5770a601a828e23b6cf" exitCode=0 Jan 20 04:05:52 crc kubenswrapper[4898]: I0120 04:05:52.516985 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-srv84" event={"ID":"befd057f-4068-49ce-8679-33b0b01fabfc","Type":"ContainerDied","Data":"bbfac92806eb1c0ab61485d2fd6336ebf0bc06204df1d5770a601a828e23b6cf"} Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.529831 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b","Type":"ContainerStarted","Data":"6912492078b790ab4a12a3598441ae1a022a3c80d8e80d65a5f7d06f6dc1c9a4"} Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.571494 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.288346079 podStartE2EDuration="42.571469159s" podCreationTimestamp="2026-01-20 04:05:11 +0000 UTC" firstStartedPulling="2026-01-20 04:05:45.639455608 +0000 UTC m=+992.239243467" lastFinishedPulling="2026-01-20 04:05:50.922578688 +0000 UTC m=+997.522366547" observedRunningTime="2026-01-20 04:05:53.570157408 +0000 UTC m=+1000.169945267" watchObservedRunningTime="2026-01-20 04:05:53.571469159 +0000 UTC m=+1000.171257018" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.761332 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48bf9549-2dda-49d0-8cb3-c45cab8929ef" path="/var/lib/kubelet/pods/48bf9549-2dda-49d0-8cb3-c45cab8929ef/volumes" Jan 20 04:05:53 crc 
kubenswrapper[4898]: I0120 04:05:53.889202 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-wspvc"] Jan 20 04:05:53 crc kubenswrapper[4898]: E0120 04:05:53.889897 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" containerName="extract-content" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.889913 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" containerName="extract-content" Jan 20 04:05:53 crc kubenswrapper[4898]: E0120 04:05:53.889925 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bf9549-2dda-49d0-8cb3-c45cab8929ef" containerName="mariadb-account-create-update" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.889932 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bf9549-2dda-49d0-8cb3-c45cab8929ef" containerName="mariadb-account-create-update" Jan 20 04:05:53 crc kubenswrapper[4898]: E0120 04:05:53.889944 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerName="extract-content" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.889952 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerName="extract-content" Jan 20 04:05:53 crc kubenswrapper[4898]: E0120 04:05:53.889964 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b58870-e14b-4394-a425-88659017b810" containerName="ovn-config" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.889971 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b58870-e14b-4394-a425-88659017b810" containerName="ovn-config" Jan 20 04:05:53 crc kubenswrapper[4898]: E0120 04:05:53.889993 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerName="registry-server" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.889998 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerName="registry-server" Jan 20 04:05:53 crc kubenswrapper[4898]: E0120 04:05:53.890014 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" containerName="registry-server" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.890020 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" containerName="registry-server" Jan 20 04:05:53 crc kubenswrapper[4898]: E0120 04:05:53.890033 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" containerName="extract-utilities" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.890040 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" containerName="extract-utilities" Jan 20 04:05:53 crc kubenswrapper[4898]: E0120 04:05:53.890053 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerName="extract-utilities" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.890059 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerName="extract-utilities" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.890200 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bf9549-2dda-49d0-8cb3-c45cab8929ef" 
containerName="mariadb-account-create-update" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.890213 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="911ffe57-42e1-4ea9-96e0-3109c2223da3" containerName="registry-server" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.890233 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="7121b2e8-f0f3-48e3-9db5-8ad947f628c6" containerName="registry-server" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.890246 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b58870-e14b-4394-a425-88659017b810" containerName="ovn-config" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.891188 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.892908 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.904345 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-wspvc"] Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.943055 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-srv84" Jan 20 04:05:53 crc kubenswrapper[4898]: I0120 04:05:53.999898 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:53.999962 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.000004 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2l66\" (UniqueName: \"kubernetes.io/projected/fdb64d23-23e5-43c0-a738-50cd17f6f03f-kube-api-access-c2l66\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.000039 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-config\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.000058 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.000093 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.100861 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-config-data\") pod \"befd057f-4068-49ce-8679-33b0b01fabfc\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.101007 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8phm\" (UniqueName: \"kubernetes.io/projected/befd057f-4068-49ce-8679-33b0b01fabfc-kube-api-access-w8phm\") pod \"befd057f-4068-49ce-8679-33b0b01fabfc\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.101109 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-combined-ca-bundle\") pod \"befd057f-4068-49ce-8679-33b0b01fabfc\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.101168 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-db-sync-config-data\") pod \"befd057f-4068-49ce-8679-33b0b01fabfc\" (UID: \"befd057f-4068-49ce-8679-33b0b01fabfc\") " Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.101493 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2l66\" (UniqueName: \"kubernetes.io/projected/fdb64d23-23e5-43c0-a738-50cd17f6f03f-kube-api-access-c2l66\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.101649 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-config\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.103035 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-config\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.103150 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.104218 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.110779 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.104423 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.110947 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.111610 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.112625 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.111748 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.117369 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befd057f-4068-49ce-8679-33b0b01fabfc-kube-api-access-w8phm" (OuterVolumeSpecName: "kube-api-access-w8phm") pod "befd057f-4068-49ce-8679-33b0b01fabfc" (UID: "befd057f-4068-49ce-8679-33b0b01fabfc"). InnerVolumeSpecName "kube-api-access-w8phm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.118879 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "befd057f-4068-49ce-8679-33b0b01fabfc" (UID: "befd057f-4068-49ce-8679-33b0b01fabfc"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.119926 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2l66\" (UniqueName: \"kubernetes.io/projected/fdb64d23-23e5-43c0-a738-50cd17f6f03f-kube-api-access-c2l66\") pod \"dnsmasq-dns-5c79d794d7-wspvc\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.138627 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "befd057f-4068-49ce-8679-33b0b01fabfc" (UID: "befd057f-4068-49ce-8679-33b0b01fabfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.147299 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-config-data" (OuterVolumeSpecName: "config-data") pod "befd057f-4068-49ce-8679-33b0b01fabfc" (UID: "befd057f-4068-49ce-8679-33b0b01fabfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.214129 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.214154 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8phm\" (UniqueName: \"kubernetes.io/projected/befd057f-4068-49ce-8679-33b0b01fabfc-kube-api-access-w8phm\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.214166 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.214175 4898 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/befd057f-4068-49ce-8679-33b0b01fabfc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.252907 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.541512 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-srv84" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.541515 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-srv84" event={"ID":"befd057f-4068-49ce-8679-33b0b01fabfc","Type":"ContainerDied","Data":"a70c6d301d177dee2dc9998f37b1261642f078a0ebbb37c0cc4ff313c8f5d470"} Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.541589 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70c6d301d177dee2dc9998f37b1261642f078a0ebbb37c0cc4ff313c8f5d470" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.715998 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-wspvc"] Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.913725 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-wspvc"] Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.973544 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-w46nx"] Jan 20 04:05:54 crc kubenswrapper[4898]: E0120 04:05:54.973852 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befd057f-4068-49ce-8679-33b0b01fabfc" containerName="glance-db-sync" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.973867 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="befd057f-4068-49ce-8679-33b0b01fabfc" containerName="glance-db-sync" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.974024 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="befd057f-4068-49ce-8679-33b0b01fabfc" containerName="glance-db-sync" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.984807 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:54 crc kubenswrapper[4898]: I0120 04:05:54.994362 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-w46nx"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.134211 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.134547 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zktv\" (UniqueName: \"kubernetes.io/projected/393c968e-aaea-4b5f-86ba-44ffa221b98e-kube-api-access-4zktv\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.134613 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.134654 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.134673 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-config\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.134688 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.205327 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.235661 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.235741 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zktv\" (UniqueName: \"kubernetes.io/projected/393c968e-aaea-4b5f-86ba-44ffa221b98e-kube-api-access-4zktv\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: 
\"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.235800 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.235844 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.235863 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-config\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.235879 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.236788 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.236789 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.237399 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.237976 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.238025 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-config\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc 
kubenswrapper[4898]: I0120 04:05:55.257724 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zktv\" (UniqueName: \"kubernetes.io/projected/393c968e-aaea-4b5f-86ba-44ffa221b98e-kube-api-access-4zktv\") pod \"dnsmasq-dns-5f59b8f679-w46nx\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.315237 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.554659 4898 generic.go:334] "Generic (PLEG): container finished" podID="fdb64d23-23e5-43c0-a738-50cd17f6f03f" containerID="d6c72e6a2f8741ece8b93b53ae81a757b1a9e00050b05bcc2cbcdb16878cdbfa" exitCode=0 Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.554933 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" event={"ID":"fdb64d23-23e5-43c0-a738-50cd17f6f03f","Type":"ContainerDied","Data":"d6c72e6a2f8741ece8b93b53ae81a757b1a9e00050b05bcc2cbcdb16878cdbfa"} Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.554957 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" event={"ID":"fdb64d23-23e5-43c0-a738-50cd17f6f03f","Type":"ContainerStarted","Data":"0a0a853a7309c05b091e5708adce42a168d056d1212fdc13f166dc3b03a4a1d5"} Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.633494 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-5wm2t"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.634566 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5wm2t" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.643052 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5wm2t"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.710634 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ng4j5"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.717902 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ng4j5" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.739884 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ng4j5"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.744970 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhgl2\" (UniqueName: \"kubernetes.io/projected/89e4d258-a008-4a05-ae27-1d5c03654aa2-kube-api-access-jhgl2\") pod \"heat-db-create-5wm2t\" (UID: \"89e4d258-a008-4a05-ae27-1d5c03654aa2\") " pod="openstack/heat-db-create-5wm2t" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.745024 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89e4d258-a008-4a05-ae27-1d5c03654aa2-operator-scripts\") pod \"heat-db-create-5wm2t\" (UID: \"89e4d258-a008-4a05-ae27-1d5c03654aa2\") " pod="openstack/heat-db-create-5wm2t" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.749629 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-206e-account-create-update-jlllb"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.750721 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-206e-account-create-update-jlllb" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.760915 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.773771 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-206e-account-create-update-jlllb"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.865148 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/710b53e5-5753-4b12-b02e-516fc4b2ed8f-operator-scripts\") pod \"heat-206e-account-create-update-jlllb\" (UID: \"710b53e5-5753-4b12-b02e-516fc4b2ed8f\") " pod="openstack/heat-206e-account-create-update-jlllb" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.868806 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-operator-scripts\") pod \"cinder-db-create-ng4j5\" (UID: \"2a8a45e8-91bc-40b8-9a92-b8f82709a03a\") " pod="openstack/cinder-db-create-ng4j5" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.868876 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt9qf\" (UniqueName: \"kubernetes.io/projected/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-kube-api-access-qt9qf\") pod \"cinder-db-create-ng4j5\" (UID: \"2a8a45e8-91bc-40b8-9a92-b8f82709a03a\") " pod="openstack/cinder-db-create-ng4j5" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.868954 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhgl2\" (UniqueName: \"kubernetes.io/projected/89e4d258-a008-4a05-ae27-1d5c03654aa2-kube-api-access-jhgl2\") pod \"heat-db-create-5wm2t\" (UID: \"89e4d258-a008-4a05-ae27-1d5c03654aa2\") " pod="openstack/heat-db-create-5wm2t" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.869049 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r559d\" (UniqueName: \"kubernetes.io/projected/710b53e5-5753-4b12-b02e-516fc4b2ed8f-kube-api-access-r559d\") pod \"heat-206e-account-create-update-jlllb\" (UID: \"710b53e5-5753-4b12-b02e-516fc4b2ed8f\") " pod="openstack/heat-206e-account-create-update-jlllb" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.869085 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89e4d258-a008-4a05-ae27-1d5c03654aa2-operator-scripts\") pod \"heat-db-create-5wm2t\" (UID: \"89e4d258-a008-4a05-ae27-1d5c03654aa2\") " pod="openstack/heat-db-create-5wm2t" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.870400 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89e4d258-a008-4a05-ae27-1d5c03654aa2-operator-scripts\") pod \"heat-db-create-5wm2t\" (UID: \"89e4d258-a008-4a05-ae27-1d5c03654aa2\") " pod="openstack/heat-db-create-5wm2t" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.882627 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-nrzcw"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.884212 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nrzcw" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.922080 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nrzcw"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.936145 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhgl2\" (UniqueName: \"kubernetes.io/projected/89e4d258-a008-4a05-ae27-1d5c03654aa2-kube-api-access-jhgl2\") pod \"heat-db-create-5wm2t\" (UID: \"89e4d258-a008-4a05-ae27-1d5c03654aa2\") " pod="openstack/heat-db-create-5wm2t" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.942951 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-67f4-account-create-update-xwdx5"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.944219 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-67f4-account-create-update-xwdx5" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.957769 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-w46nx"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.957901 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.958516 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5wm2t" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.964773 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-67f4-account-create-update-xwdx5"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.970613 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-operator-scripts\") pod \"barbican-db-create-nrzcw\" (UID: \"8b590651-4b56-4cf7-8374-c8fe0c8b26e5\") " pod="openstack/barbican-db-create-nrzcw" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.970709 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/710b53e5-5753-4b12-b02e-516fc4b2ed8f-operator-scripts\") pod \"heat-206e-account-create-update-jlllb\" (UID: \"710b53e5-5753-4b12-b02e-516fc4b2ed8f\") " pod="openstack/heat-206e-account-create-update-jlllb" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.970756 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-operator-scripts\") pod \"cinder-db-create-ng4j5\" (UID: \"2a8a45e8-91bc-40b8-9a92-b8f82709a03a\") " pod="openstack/cinder-db-create-ng4j5" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.970782 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzqrw\" (UniqueName: \"kubernetes.io/projected/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-kube-api-access-rzqrw\") pod \"barbican-db-create-nrzcw\" (UID: \"8b590651-4b56-4cf7-8374-c8fe0c8b26e5\") " pod="openstack/barbican-db-create-nrzcw" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.970811 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt9qf\" (UniqueName: \"kubernetes.io/projected/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-kube-api-access-qt9qf\") pod 
\"cinder-db-create-ng4j5\" (UID: \"2a8a45e8-91bc-40b8-9a92-b8f82709a03a\") " pod="openstack/cinder-db-create-ng4j5" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.970860 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r559d\" (UniqueName: \"kubernetes.io/projected/710b53e5-5753-4b12-b02e-516fc4b2ed8f-kube-api-access-r559d\") pod \"heat-206e-account-create-update-jlllb\" (UID: \"710b53e5-5753-4b12-b02e-516fc4b2ed8f\") " pod="openstack/heat-206e-account-create-update-jlllb" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.971762 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/710b53e5-5753-4b12-b02e-516fc4b2ed8f-operator-scripts\") pod \"heat-206e-account-create-update-jlllb\" (UID: \"710b53e5-5753-4b12-b02e-516fc4b2ed8f\") " pod="openstack/heat-206e-account-create-update-jlllb" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.972247 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-operator-scripts\") pod \"cinder-db-create-ng4j5\" (UID: \"2a8a45e8-91bc-40b8-9a92-b8f82709a03a\") " pod="openstack/cinder-db-create-ng4j5" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.980534 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d158-account-create-update-nfbsk"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.981636 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d158-account-create-update-nfbsk" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.992719 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d158-account-create-update-nfbsk"] Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.997111 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 20 04:05:55 crc kubenswrapper[4898]: I0120 04:05:55.999397 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt9qf\" (UniqueName: \"kubernetes.io/projected/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-kube-api-access-qt9qf\") pod \"cinder-db-create-ng4j5\" (UID: \"2a8a45e8-91bc-40b8-9a92-b8f82709a03a\") " pod="openstack/cinder-db-create-ng4j5" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.004499 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-7gkfn"] Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.005617 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.008255 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r559d\" (UniqueName: \"kubernetes.io/projected/710b53e5-5753-4b12-b02e-516fc4b2ed8f-kube-api-access-r559d\") pod \"heat-206e-account-create-update-jlllb\" (UID: \"710b53e5-5753-4b12-b02e-516fc4b2ed8f\") " pod="openstack/heat-206e-account-create-update-jlllb" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.012970 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7gkfn"] Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.026002 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.026647 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.026764 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t7cgs" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.026881 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.046859 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bwv6k"] Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.047968 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bwv6k" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.049386 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ng4j5" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.066498 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bwv6k"] Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.072179 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzqrw\" (UniqueName: \"kubernetes.io/projected/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-kube-api-access-rzqrw\") pod \"barbican-db-create-nrzcw\" (UID: \"8b590651-4b56-4cf7-8374-c8fe0c8b26e5\") " pod="openstack/barbican-db-create-nrzcw" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.072254 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-config-data\") pod \"keystone-db-sync-7gkfn\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.072295 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txw9q\" (UniqueName: \"kubernetes.io/projected/10755ec2-23ba-4fea-852a-546e494f98df-kube-api-access-txw9q\") pod \"keystone-db-sync-7gkfn\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.072314 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcpd9\" (UniqueName: \"kubernetes.io/projected/8e3e17d9-6103-4600-8159-178bcefd2c84-kube-api-access-bcpd9\") pod \"barbican-67f4-account-create-update-xwdx5\" (UID: \"8e3e17d9-6103-4600-8159-178bcefd2c84\") " 
pod="openstack/barbican-67f4-account-create-update-xwdx5" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.072339 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-operator-scripts\") pod \"barbican-db-create-nrzcw\" (UID: \"8b590651-4b56-4cf7-8374-c8fe0c8b26e5\") " pod="openstack/barbican-db-create-nrzcw" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.072366 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68w4j\" (UniqueName: \"kubernetes.io/projected/edb0f751-8705-49a0-9d9a-67633e2f0379-kube-api-access-68w4j\") pod \"cinder-d158-account-create-update-nfbsk\" (UID: \"edb0f751-8705-49a0-9d9a-67633e2f0379\") " pod="openstack/cinder-d158-account-create-update-nfbsk" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.072396 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-combined-ca-bundle\") pod \"keystone-db-sync-7gkfn\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.072425 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb0f751-8705-49a0-9d9a-67633e2f0379-operator-scripts\") pod \"cinder-d158-account-create-update-nfbsk\" (UID: \"edb0f751-8705-49a0-9d9a-67633e2f0379\") " pod="openstack/cinder-d158-account-create-update-nfbsk" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.072595 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e3e17d9-6103-4600-8159-178bcefd2c84-operator-scripts\") pod \"barbican-67f4-account-create-update-xwdx5\" (UID: \"8e3e17d9-6103-4600-8159-178bcefd2c84\") " pod="openstack/barbican-67f4-account-create-update-xwdx5" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.073308 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-operator-scripts\") pod \"barbican-db-create-nrzcw\" (UID: \"8b590651-4b56-4cf7-8374-c8fe0c8b26e5\") " pod="openstack/barbican-db-create-nrzcw" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.112504 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzqrw\" (UniqueName: \"kubernetes.io/projected/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-kube-api-access-rzqrw\") pod \"barbican-db-create-nrzcw\" (UID: \"8b590651-4b56-4cf7-8374-c8fe0c8b26e5\") " pod="openstack/barbican-db-create-nrzcw" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.132402 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-206e-account-create-update-jlllb" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.174035 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-config-data\") pod \"keystone-db-sync-7gkfn\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.174078 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e099701-df46-48de-883e-65d209f81af0-operator-scripts\") pod \"neutron-db-create-bwv6k\" (UID: \"2e099701-df46-48de-883e-65d209f81af0\") " pod="openstack/neutron-db-create-bwv6k" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.174112 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txw9q\" (UniqueName: \"kubernetes.io/projected/10755ec2-23ba-4fea-852a-546e494f98df-kube-api-access-txw9q\") pod \"keystone-db-sync-7gkfn\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.174131 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcpd9\" (UniqueName: \"kubernetes.io/projected/8e3e17d9-6103-4600-8159-178bcefd2c84-kube-api-access-bcpd9\") pod \"barbican-67f4-account-create-update-xwdx5\" (UID: \"8e3e17d9-6103-4600-8159-178bcefd2c84\") " pod="openstack/barbican-67f4-account-create-update-xwdx5" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.174172 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68w4j\" (UniqueName: \"kubernetes.io/projected/edb0f751-8705-49a0-9d9a-67633e2f0379-kube-api-access-68w4j\") pod \"cinder-d158-account-create-update-nfbsk\" (UID: \"edb0f751-8705-49a0-9d9a-67633e2f0379\") " pod="openstack/cinder-d158-account-create-update-nfbsk" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.174203 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-combined-ca-bundle\") pod \"keystone-db-sync-7gkfn\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.174232 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb0f751-8705-49a0-9d9a-67633e2f0379-operator-scripts\") pod \"cinder-d158-account-create-update-nfbsk\" (UID: \"edb0f751-8705-49a0-9d9a-67633e2f0379\") " pod="openstack/cinder-d158-account-create-update-nfbsk" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.174260 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e3e17d9-6103-4600-8159-178bcefd2c84-operator-scripts\") pod \"barbican-67f4-account-create-update-xwdx5\" (UID: \"8e3e17d9-6103-4600-8159-178bcefd2c84\") " pod="openstack/barbican-67f4-account-create-update-xwdx5" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.174280 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6dkx\" (UniqueName: 
\"kubernetes.io/projected/2e099701-df46-48de-883e-65d209f81af0-kube-api-access-j6dkx\") pod \"neutron-db-create-bwv6k\" (UID: \"2e099701-df46-48de-883e-65d209f81af0\") " pod="openstack/neutron-db-create-bwv6k" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.176380 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb0f751-8705-49a0-9d9a-67633e2f0379-operator-scripts\") pod \"cinder-d158-account-create-update-nfbsk\" (UID: \"edb0f751-8705-49a0-9d9a-67633e2f0379\") " pod="openstack/cinder-d158-account-create-update-nfbsk" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.176917 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e3e17d9-6103-4600-8159-178bcefd2c84-operator-scripts\") pod \"barbican-67f4-account-create-update-xwdx5\" (UID: \"8e3e17d9-6103-4600-8159-178bcefd2c84\") " pod="openstack/barbican-67f4-account-create-update-xwdx5" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.180426 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-config-data\") pod \"keystone-db-sync-7gkfn\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.181773 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-combined-ca-bundle\") pod \"keystone-db-sync-7gkfn\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.196160 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcpd9\" (UniqueName: \"kubernetes.io/projected/8e3e17d9-6103-4600-8159-178bcefd2c84-kube-api-access-bcpd9\") pod \"barbican-67f4-account-create-update-xwdx5\" (UID: \"8e3e17d9-6103-4600-8159-178bcefd2c84\") " pod="openstack/barbican-67f4-account-create-update-xwdx5" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.203999 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txw9q\" (UniqueName: \"kubernetes.io/projected/10755ec2-23ba-4fea-852a-546e494f98df-kube-api-access-txw9q\") pod \"keystone-db-sync-7gkfn\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.210583 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68w4j\" (UniqueName: \"kubernetes.io/projected/edb0f751-8705-49a0-9d9a-67633e2f0379-kube-api-access-68w4j\") pod \"cinder-d158-account-create-update-nfbsk\" (UID: \"edb0f751-8705-49a0-9d9a-67633e2f0379\") " pod="openstack/cinder-d158-account-create-update-nfbsk" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.226351 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d299-account-create-update-tm2k8"] Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.227808 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d299-account-create-update-tm2k8" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.234026 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.246842 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d299-account-create-update-tm2k8"] Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.291734 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6dkx\" (UniqueName: \"kubernetes.io/projected/2e099701-df46-48de-883e-65d209f81af0-kube-api-access-j6dkx\") pod \"neutron-db-create-bwv6k\" (UID: \"2e099701-df46-48de-883e-65d209f81af0\") " pod="openstack/neutron-db-create-bwv6k" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.292118 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c91d56-5dc8-4607-aad0-85214357b977-operator-scripts\") pod \"neutron-d299-account-create-update-tm2k8\" (UID: \"67c91d56-5dc8-4607-aad0-85214357b977\") " pod="openstack/neutron-d299-account-create-update-tm2k8" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.292241 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e099701-df46-48de-883e-65d209f81af0-operator-scripts\") pod \"neutron-db-create-bwv6k\" (UID: \"2e099701-df46-48de-883e-65d209f81af0\") " pod="openstack/neutron-db-create-bwv6k" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.292341 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b7qg\" (UniqueName: \"kubernetes.io/projected/67c91d56-5dc8-4607-aad0-85214357b977-kube-api-access-4b7qg\") pod \"neutron-d299-account-create-update-tm2k8\" (UID: \"67c91d56-5dc8-4607-aad0-85214357b977\") " pod="openstack/neutron-d299-account-create-update-tm2k8" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.294824 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e099701-df46-48de-883e-65d209f81af0-operator-scripts\") pod \"neutron-db-create-bwv6k\" (UID: \"2e099701-df46-48de-883e-65d209f81af0\") " pod="openstack/neutron-db-create-bwv6k" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.312658 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6dkx\" (UniqueName: \"kubernetes.io/projected/2e099701-df46-48de-883e-65d209f81af0-kube-api-access-j6dkx\") pod \"neutron-db-create-bwv6k\" (UID: \"2e099701-df46-48de-883e-65d209f81af0\") " pod="openstack/neutron-db-create-bwv6k" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.392046 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nrzcw" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.393401 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c91d56-5dc8-4607-aad0-85214357b977-operator-scripts\") pod \"neutron-d299-account-create-update-tm2k8\" (UID: \"67c91d56-5dc8-4607-aad0-85214357b977\") " pod="openstack/neutron-d299-account-create-update-tm2k8" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.394354 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c91d56-5dc8-4607-aad0-85214357b977-operator-scripts\") pod \"neutron-d299-account-create-update-tm2k8\" (UID: \"67c91d56-5dc8-4607-aad0-85214357b977\") " pod="openstack/neutron-d299-account-create-update-tm2k8" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.394494 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b7qg\" (UniqueName: \"kubernetes.io/projected/67c91d56-5dc8-4607-aad0-85214357b977-kube-api-access-4b7qg\") pod \"neutron-d299-account-create-update-tm2k8\" (UID: \"67c91d56-5dc8-4607-aad0-85214357b977\") " pod="openstack/neutron-d299-account-create-update-tm2k8" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.413622 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b7qg\" (UniqueName: \"kubernetes.io/projected/67c91d56-5dc8-4607-aad0-85214357b977-kube-api-access-4b7qg\") pod \"neutron-d299-account-create-update-tm2k8\" (UID: \"67c91d56-5dc8-4607-aad0-85214357b977\") " pod="openstack/neutron-d299-account-create-update-tm2k8" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.451662 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-67f4-account-create-update-xwdx5" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.464708 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d158-account-create-update-nfbsk" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.486912 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.503093 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bwv6k" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.516358 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.550690 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d299-account-create-update-tm2k8" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.563505 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" event={"ID":"fdb64d23-23e5-43c0-a738-50cd17f6f03f","Type":"ContainerDied","Data":"0a0a853a7309c05b091e5708adce42a168d056d1212fdc13f166dc3b03a4a1d5"} Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.563550 4898 scope.go:117] "RemoveContainer" containerID="d6c72e6a2f8741ece8b93b53ae81a757b1a9e00050b05bcc2cbcdb16878cdbfa" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.563649 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-wspvc" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.566133 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" event={"ID":"393c968e-aaea-4b5f-86ba-44ffa221b98e","Type":"ContainerStarted","Data":"3d170c4930811f750bfcde24f39f24e61170429d5cd542004f9071d09e7f627f"} Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.597145 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2l66\" (UniqueName: \"kubernetes.io/projected/fdb64d23-23e5-43c0-a738-50cd17f6f03f-kube-api-access-c2l66\") pod \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.597194 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-swift-storage-0\") pod \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.597257 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-config\") pod \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.597450 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-nb\") pod \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.597502 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-sb\") pod \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.597521 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-svc\") pod \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\" (UID: \"fdb64d23-23e5-43c0-a738-50cd17f6f03f\") " Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.610503 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb64d23-23e5-43c0-a738-50cd17f6f03f-kube-api-access-c2l66" (OuterVolumeSpecName: "kube-api-access-c2l66") pod "fdb64d23-23e5-43c0-a738-50cd17f6f03f" (UID: "fdb64d23-23e5-43c0-a738-50cd17f6f03f"). InnerVolumeSpecName "kube-api-access-c2l66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.651354 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdb64d23-23e5-43c0-a738-50cd17f6f03f" (UID: "fdb64d23-23e5-43c0-a738-50cd17f6f03f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.658541 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdb64d23-23e5-43c0-a738-50cd17f6f03f" (UID: "fdb64d23-23e5-43c0-a738-50cd17f6f03f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.659708 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdb64d23-23e5-43c0-a738-50cd17f6f03f" (UID: "fdb64d23-23e5-43c0-a738-50cd17f6f03f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.662044 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fdb64d23-23e5-43c0-a738-50cd17f6f03f" (UID: "fdb64d23-23e5-43c0-a738-50cd17f6f03f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.666583 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-config" (OuterVolumeSpecName: "config") pod "fdb64d23-23e5-43c0-a738-50cd17f6f03f" (UID: "fdb64d23-23e5-43c0-a738-50cd17f6f03f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.699367 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2l66\" (UniqueName: \"kubernetes.io/projected/fdb64d23-23e5-43c0-a738-50cd17f6f03f-kube-api-access-c2l66\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.699393 4898 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.699404 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.699413 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.699422 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.699441 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdb64d23-23e5-43c0-a738-50cd17f6f03f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.918073 4898 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/root-account-create-update-kcrtx"] Jan 20 04:05:56 crc kubenswrapper[4898]: E0120 04:05:56.918456 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb64d23-23e5-43c0-a738-50cd17f6f03f" containerName="init" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.918471 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb64d23-23e5-43c0-a738-50cd17f6f03f" containerName="init" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.918659 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb64d23-23e5-43c0-a738-50cd17f6f03f" containerName="init" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.919171 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kcrtx" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.929250 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.952649 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kcrtx"] Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.971907 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-wspvc"] Jan 20 04:05:56 crc kubenswrapper[4898]: I0120 04:05:56.983674 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-wspvc"] Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.004287 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b14bc956-d554-4fef-be24-28d68be49afe-operator-scripts\") pod \"root-account-create-update-kcrtx\" (UID: \"b14bc956-d554-4fef-be24-28d68be49afe\") " pod="openstack/root-account-create-update-kcrtx" Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.004415 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzgl7\" (UniqueName: \"kubernetes.io/projected/b14bc956-d554-4fef-be24-28d68be49afe-kube-api-access-wzgl7\") pod \"root-account-create-update-kcrtx\" (UID: \"b14bc956-d554-4fef-be24-28d68be49afe\") " pod="openstack/root-account-create-update-kcrtx" Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.038744 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5wm2t"] Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.106562 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b14bc956-d554-4fef-be24-28d68be49afe-operator-scripts\") pod \"root-account-create-update-kcrtx\" (UID: \"b14bc956-d554-4fef-be24-28d68be49afe\") " pod="openstack/root-account-create-update-kcrtx" Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.106731 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzgl7\" (UniqueName: \"kubernetes.io/projected/b14bc956-d554-4fef-be24-28d68be49afe-kube-api-access-wzgl7\") pod \"root-account-create-update-kcrtx\" (UID: \"b14bc956-d554-4fef-be24-28d68be49afe\") " pod="openstack/root-account-create-update-kcrtx" Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.108152 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b14bc956-d554-4fef-be24-28d68be49afe-operator-scripts\") pod \"root-account-create-update-kcrtx\" (UID: \"b14bc956-d554-4fef-be24-28d68be49afe\") " pod="openstack/root-account-create-update-kcrtx" Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.155160 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzgl7\" (UniqueName: \"kubernetes.io/projected/b14bc956-d554-4fef-be24-28d68be49afe-kube-api-access-wzgl7\") pod \"root-account-create-update-kcrtx\" (UID: \"b14bc956-d554-4fef-be24-28d68be49afe\") " pod="openstack/root-account-create-update-kcrtx" Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.310312 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kcrtx" Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.585363 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5wm2t" event={"ID":"89e4d258-a008-4a05-ae27-1d5c03654aa2","Type":"ContainerStarted","Data":"df013354af894a417e9f250dc5595a3065a6b2ff5cb0d60b3758d7c064a2d581"} Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.715667 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d158-account-create-update-nfbsk"] Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.750709 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb64d23-23e5-43c0-a738-50cd17f6f03f" path="/var/lib/kubelet/pods/fdb64d23-23e5-43c0-a738-50cd17f6f03f/volumes" Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.751361 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d299-account-create-update-tm2k8"] Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.793775 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-206e-account-create-update-jlllb"] Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.874469 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nrzcw"] Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.885662 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-67f4-account-create-update-xwdx5"] Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.903351 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7gkfn"] Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.909795 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bwv6k"] Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.940496 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ng4j5"] Jan 20 04:05:57 crc kubenswrapper[4898]: I0120 04:05:57.982781 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kcrtx"] Jan 20 04:05:58 crc kubenswrapper[4898]: W0120 04:05:58.011094 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10755ec2_23ba_4fea_852a_546e494f98df.slice/crio-1f86389dda4d9f073fb5d7393d0c958c88fae94d9404721d9476744ca5ec350c WatchSource:0}: Error finding container 1f86389dda4d9f073fb5d7393d0c958c88fae94d9404721d9476744ca5ec350c: Status 404 returned error can't find the container with id 1f86389dda4d9f073fb5d7393d0c958c88fae94d9404721d9476744ca5ec350c Jan 20 04:05:58 crc kubenswrapper[4898]: W0120 04:05:58.013059 4898 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a8a45e8_91bc_40b8_9a92_b8f82709a03a.slice/crio-ed902f97dcb3b43533446ec46fd5b3db54ed288c69a497652ab5024dbdf9902d WatchSource:0}: Error finding container ed902f97dcb3b43533446ec46fd5b3db54ed288c69a497652ab5024dbdf9902d: Status 404 returned error can't find the container with id ed902f97dcb3b43533446ec46fd5b3db54ed288c69a497652ab5024dbdf9902d Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.402351 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tq8nx"] Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.404501 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.416672 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq8nx"] Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.551324 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-utilities\") pod \"certified-operators-tq8nx\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.551652 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgfmn\" (UniqueName: \"kubernetes.io/projected/c4018f65-d2f8-4e71-a417-1310469128c6-kube-api-access-mgfmn\") pod \"certified-operators-tq8nx\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.551700 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-catalog-content\") pod \"certified-operators-tq8nx\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.611778 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-67f4-account-create-update-xwdx5" event={"ID":"8e3e17d9-6103-4600-8159-178bcefd2c84","Type":"ContainerStarted","Data":"9377ed6020812d5ed6032abfe684d20e97912491cb5ed0d1642fd7888760b314"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.611834 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-67f4-account-create-update-xwdx5" event={"ID":"8e3e17d9-6103-4600-8159-178bcefd2c84","Type":"ContainerStarted","Data":"c621927c55a331bff66a9d30986cf3fbf8cc3812d3747df024674e907936b23d"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.627347 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d299-account-create-update-tm2k8" event={"ID":"67c91d56-5dc8-4607-aad0-85214357b977","Type":"ContainerStarted","Data":"d887b73bbbbadd3085f1373f61577c1a9c4f54ebf9313d9ca00538a42491ec36"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.627382 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d299-account-create-update-tm2k8" event={"ID":"67c91d56-5dc8-4607-aad0-85214357b977","Type":"ContainerStarted","Data":"1a2b665203a1a2210f4e6e7f5af66c33a6bef9d60704351450b69b82ed30a655"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 
04:05:58.633580 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-67f4-account-create-update-xwdx5" podStartSLOduration=3.633554297 podStartE2EDuration="3.633554297s" podCreationTimestamp="2026-01-20 04:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:58.632454702 +0000 UTC m=+1005.232242571" watchObservedRunningTime="2026-01-20 04:05:58.633554297 +0000 UTC m=+1005.233342166" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.642417 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d158-account-create-update-nfbsk" event={"ID":"edb0f751-8705-49a0-9d9a-67633e2f0379","Type":"ContainerStarted","Data":"016bacf2a366a084523ddf98ed44b9d47203df2ca7040087ea4dd53244751c49"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.642474 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d158-account-create-update-nfbsk" event={"ID":"edb0f751-8705-49a0-9d9a-67633e2f0379","Type":"ContainerStarted","Data":"ae26c43bde31c3be3970814a3623022e1cbf22aa770e9f765e6e2882c4937a79"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.652907 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgfmn\" (UniqueName: \"kubernetes.io/projected/c4018f65-d2f8-4e71-a417-1310469128c6-kube-api-access-mgfmn\") pod \"certified-operators-tq8nx\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.652967 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-catalog-content\") pod \"certified-operators-tq8nx\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.653023 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-utilities\") pod \"certified-operators-tq8nx\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.653532 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-utilities\") pod \"certified-operators-tq8nx\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.655511 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-catalog-content\") pod \"certified-operators-tq8nx\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.656562 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ng4j5" event={"ID":"2a8a45e8-91bc-40b8-9a92-b8f82709a03a","Type":"ContainerStarted","Data":"8f7586655ce567e812884d71d4f168718aef199339faefacfec57d292b198a0f"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.656607 4898 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-db-create-ng4j5" event={"ID":"2a8a45e8-91bc-40b8-9a92-b8f82709a03a","Type":"ContainerStarted","Data":"ed902f97dcb3b43533446ec46fd5b3db54ed288c69a497652ab5024dbdf9902d"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.662111 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d299-account-create-update-tm2k8" podStartSLOduration=2.662086154 podStartE2EDuration="2.662086154s" podCreationTimestamp="2026-01-20 04:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:58.647828576 +0000 UTC m=+1005.247616435" watchObservedRunningTime="2026-01-20 04:05:58.662086154 +0000 UTC m=+1005.261874013" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.678918 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bwv6k" event={"ID":"2e099701-df46-48de-883e-65d209f81af0","Type":"ContainerStarted","Data":"bf93cbc90dd158b1cdd9f5e226b81b0af0cac3edb57ed078d9d788cb53dc17ac"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.678965 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bwv6k" event={"ID":"2e099701-df46-48de-883e-65d209f81af0","Type":"ContainerStarted","Data":"466c51f48109a295358f01960c7e7043daa02edaf487ae4fff5849ed9f8a4122"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.690053 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kcrtx" event={"ID":"b14bc956-d554-4fef-be24-28d68be49afe","Type":"ContainerStarted","Data":"1f0f1bd77a5f23b4f28bbfdd8332ceb61a972afc1f5a34d9edaf5b7c8380dd2b"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.690552 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kcrtx" event={"ID":"b14bc956-d554-4fef-be24-28d68be49afe","Type":"ContainerStarted","Data":"33624472bb253f5e0ce443b28a33877eafe844a48a736772037b74b8ccbbc9dd"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.702113 4898 generic.go:334] "Generic (PLEG): container finished" podID="393c968e-aaea-4b5f-86ba-44ffa221b98e" containerID="a585b3b807ff6eba355744114e6d293776cdc0077a2f03c80a1f77065a7d1812" exitCode=0 Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.702204 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" event={"ID":"393c968e-aaea-4b5f-86ba-44ffa221b98e","Type":"ContainerDied","Data":"a585b3b807ff6eba355744114e6d293776cdc0077a2f03c80a1f77065a7d1812"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.708743 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7gkfn" event={"ID":"10755ec2-23ba-4fea-852a-546e494f98df","Type":"ContainerStarted","Data":"1f86389dda4d9f073fb5d7393d0c958c88fae94d9404721d9476744ca5ec350c"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.715658 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d158-account-create-update-nfbsk" podStartSLOduration=3.715638439 podStartE2EDuration="3.715638439s" podCreationTimestamp="2026-01-20 04:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:58.669347092 +0000 UTC m=+1005.269134951" watchObservedRunningTime="2026-01-20 04:05:58.715638439 +0000 UTC m=+1005.315426298" Jan 20 04:05:58 crc kubenswrapper[4898]: 
I0120 04:05:58.719208 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nrzcw" event={"ID":"8b590651-4b56-4cf7-8374-c8fe0c8b26e5","Type":"ContainerStarted","Data":"9350047952c9a49fcf8e732ba0dc0ed170f1015a176064259a7eeb59aafe6d28"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.719260 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nrzcw" event={"ID":"8b590651-4b56-4cf7-8374-c8fe0c8b26e5","Type":"ContainerStarted","Data":"716a91aedb83d0a6a92866c05f9c8a5a339861d15608d428cf4f722cea65a688"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.719292 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgfmn\" (UniqueName: \"kubernetes.io/projected/c4018f65-d2f8-4e71-a417-1310469128c6-kube-api-access-mgfmn\") pod \"certified-operators-tq8nx\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.727350 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-206e-account-create-update-jlllb" event={"ID":"710b53e5-5753-4b12-b02e-516fc4b2ed8f","Type":"ContainerStarted","Data":"58cd567b840d4c90483f01188ff97e82e469ea8c7de68c7e12279fed88135fbf"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.727565 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-206e-account-create-update-jlllb" event={"ID":"710b53e5-5753-4b12-b02e-516fc4b2ed8f","Type":"ContainerStarted","Data":"762fd9113b2004c98b4f92f429d5b419ae7b43a7c293694f7a39ab4feab75352"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.732210 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5wm2t" event={"ID":"89e4d258-a008-4a05-ae27-1d5c03654aa2","Type":"ContainerStarted","Data":"2ca51ff5a7de11da0315f81ff2e6d99d146784adfef32640a1f3dcb9f2ad143e"} Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.737845 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.742123 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-ng4j5" podStartSLOduration=3.74209709 podStartE2EDuration="3.74209709s" podCreationTimestamp="2026-01-20 04:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:58.693591375 +0000 UTC m=+1005.293379244" watchObservedRunningTime="2026-01-20 04:05:58.74209709 +0000 UTC m=+1005.341884949" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.789708 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-kcrtx" podStartSLOduration=2.7896813270000003 podStartE2EDuration="2.789681327s" podCreationTimestamp="2026-01-20 04:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:58.716175775 +0000 UTC m=+1005.315963634" watchObservedRunningTime="2026-01-20 04:05:58.789681327 +0000 UTC m=+1005.389469186" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.810112 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-bwv6k" podStartSLOduration=2.810093879 podStartE2EDuration="2.810093879s" podCreationTimestamp="2026-01-20 04:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:58.736820764 +0000 UTC m=+1005.336608643" watchObservedRunningTime="2026-01-20 04:05:58.810093879 +0000 UTC m=+1005.409881738" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.814641 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-nrzcw" podStartSLOduration=3.814625081 podStartE2EDuration="3.814625081s" podCreationTimestamp="2026-01-20 04:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:58.765756605 +0000 UTC m=+1005.365544464" watchObservedRunningTime="2026-01-20 04:05:58.814625081 +0000 UTC m=+1005.414412930" Jan 20 04:05:58 crc kubenswrapper[4898]: I0120 04:05:58.828022 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-206e-account-create-update-jlllb" podStartSLOduration=3.8280045830000002 podStartE2EDuration="3.828004583s" podCreationTimestamp="2026-01-20 04:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:58.783521423 +0000 UTC m=+1005.383309282" watchObservedRunningTime="2026-01-20 04:05:58.828004583 +0000 UTC m=+1005.427792442" Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.267935 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq8nx"] Jan 20 04:05:59 crc kubenswrapper[4898]: W0120 04:05:59.312897 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4018f65_d2f8_4e71_a417_1310469128c6.slice/crio-dfb4a0c3c5c0f785e8903ff642420bc5c48e82099790716ede21938d1f1fcf4a WatchSource:0}: Error finding container dfb4a0c3c5c0f785e8903ff642420bc5c48e82099790716ede21938d1f1fcf4a: Status 404 returned error 
can't find the container with id dfb4a0c3c5c0f785e8903ff642420bc5c48e82099790716ede21938d1f1fcf4a Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.742550 4898 generic.go:334] "Generic (PLEG): container finished" podID="8e3e17d9-6103-4600-8159-178bcefd2c84" containerID="9377ed6020812d5ed6032abfe684d20e97912491cb5ed0d1642fd7888760b314" exitCode=0 Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.742597 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-67f4-account-create-update-xwdx5" event={"ID":"8e3e17d9-6103-4600-8159-178bcefd2c84","Type":"ContainerDied","Data":"9377ed6020812d5ed6032abfe684d20e97912491cb5ed0d1642fd7888760b314"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.748110 4898 generic.go:334] "Generic (PLEG): container finished" podID="89e4d258-a008-4a05-ae27-1d5c03654aa2" containerID="2ca51ff5a7de11da0315f81ff2e6d99d146784adfef32640a1f3dcb9f2ad143e" exitCode=0 Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.748215 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5wm2t" event={"ID":"89e4d258-a008-4a05-ae27-1d5c03654aa2","Type":"ContainerDied","Data":"2ca51ff5a7de11da0315f81ff2e6d99d146784adfef32640a1f3dcb9f2ad143e"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.758210 4898 generic.go:334] "Generic (PLEG): container finished" podID="c4018f65-d2f8-4e71-a417-1310469128c6" containerID="c7f0062b1a548f88891f71f53586fc908780348ecf554f828ba040b2997da8f5" exitCode=0 Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.758745 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq8nx" event={"ID":"c4018f65-d2f8-4e71-a417-1310469128c6","Type":"ContainerDied","Data":"c7f0062b1a548f88891f71f53586fc908780348ecf554f828ba040b2997da8f5"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.758799 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq8nx" event={"ID":"c4018f65-d2f8-4e71-a417-1310469128c6","Type":"ContainerStarted","Data":"dfb4a0c3c5c0f785e8903ff642420bc5c48e82099790716ede21938d1f1fcf4a"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.763356 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" event={"ID":"393c968e-aaea-4b5f-86ba-44ffa221b98e","Type":"ContainerStarted","Data":"1b813b2d842ece85416dbd0c831e6e8ddc1cc59432f832b3735b9790d5eaf388"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.763471 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.765105 4898 generic.go:334] "Generic (PLEG): container finished" podID="edb0f751-8705-49a0-9d9a-67633e2f0379" containerID="016bacf2a366a084523ddf98ed44b9d47203df2ca7040087ea4dd53244751c49" exitCode=0 Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.765171 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d158-account-create-update-nfbsk" event={"ID":"edb0f751-8705-49a0-9d9a-67633e2f0379","Type":"ContainerDied","Data":"016bacf2a366a084523ddf98ed44b9d47203df2ca7040087ea4dd53244751c49"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.766504 4898 generic.go:334] "Generic (PLEG): container finished" podID="2e099701-df46-48de-883e-65d209f81af0" containerID="bf93cbc90dd158b1cdd9f5e226b81b0af0cac3edb57ed078d9d788cb53dc17ac" exitCode=0 Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.766547 4898 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bwv6k" event={"ID":"2e099701-df46-48de-883e-65d209f81af0","Type":"ContainerDied","Data":"bf93cbc90dd158b1cdd9f5e226b81b0af0cac3edb57ed078d9d788cb53dc17ac"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.769244 4898 generic.go:334] "Generic (PLEG): container finished" podID="b14bc956-d554-4fef-be24-28d68be49afe" containerID="1f0f1bd77a5f23b4f28bbfdd8332ceb61a972afc1f5a34d9edaf5b7c8380dd2b" exitCode=0 Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.769296 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kcrtx" event={"ID":"b14bc956-d554-4fef-be24-28d68be49afe","Type":"ContainerDied","Data":"1f0f1bd77a5f23b4f28bbfdd8332ceb61a972afc1f5a34d9edaf5b7c8380dd2b"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.771113 4898 generic.go:334] "Generic (PLEG): container finished" podID="710b53e5-5753-4b12-b02e-516fc4b2ed8f" containerID="58cd567b840d4c90483f01188ff97e82e469ea8c7de68c7e12279fed88135fbf" exitCode=0 Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.771143 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-206e-account-create-update-jlllb" event={"ID":"710b53e5-5753-4b12-b02e-516fc4b2ed8f","Type":"ContainerDied","Data":"58cd567b840d4c90483f01188ff97e82e469ea8c7de68c7e12279fed88135fbf"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.773898 4898 generic.go:334] "Generic (PLEG): container finished" podID="67c91d56-5dc8-4607-aad0-85214357b977" containerID="d887b73bbbbadd3085f1373f61577c1a9c4f54ebf9313d9ca00538a42491ec36" exitCode=0 Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.773929 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d299-account-create-update-tm2k8" event={"ID":"67c91d56-5dc8-4607-aad0-85214357b977","Type":"ContainerDied","Data":"d887b73bbbbadd3085f1373f61577c1a9c4f54ebf9313d9ca00538a42491ec36"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.779767 4898 generic.go:334] "Generic (PLEG): container finished" podID="2a8a45e8-91bc-40b8-9a92-b8f82709a03a" containerID="8f7586655ce567e812884d71d4f168718aef199339faefacfec57d292b198a0f" exitCode=0 Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.779848 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ng4j5" event={"ID":"2a8a45e8-91bc-40b8-9a92-b8f82709a03a","Type":"ContainerDied","Data":"8f7586655ce567e812884d71d4f168718aef199339faefacfec57d292b198a0f"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.787513 4898 generic.go:334] "Generic (PLEG): container finished" podID="8b590651-4b56-4cf7-8374-c8fe0c8b26e5" containerID="9350047952c9a49fcf8e732ba0dc0ed170f1015a176064259a7eeb59aafe6d28" exitCode=0 Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.787556 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nrzcw" event={"ID":"8b590651-4b56-4cf7-8374-c8fe0c8b26e5","Type":"ContainerDied","Data":"9350047952c9a49fcf8e732ba0dc0ed170f1015a176064259a7eeb59aafe6d28"} Jan 20 04:05:59 crc kubenswrapper[4898]: I0120 04:05:59.873273 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" podStartSLOduration=5.873251536 podStartE2EDuration="5.873251536s" podCreationTimestamp="2026-01-20 04:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:05:59.870984915 +0000 UTC 
m=+1006.470772794" watchObservedRunningTime="2026-01-20 04:05:59.873251536 +0000 UTC m=+1006.473039405" Jan 20 04:06:00 crc kubenswrapper[4898]: I0120 04:06:00.114532 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5wm2t" Jan 20 04:06:00 crc kubenswrapper[4898]: I0120 04:06:00.206981 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhgl2\" (UniqueName: \"kubernetes.io/projected/89e4d258-a008-4a05-ae27-1d5c03654aa2-kube-api-access-jhgl2\") pod \"89e4d258-a008-4a05-ae27-1d5c03654aa2\" (UID: \"89e4d258-a008-4a05-ae27-1d5c03654aa2\") " Jan 20 04:06:00 crc kubenswrapper[4898]: I0120 04:06:00.207090 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89e4d258-a008-4a05-ae27-1d5c03654aa2-operator-scripts\") pod \"89e4d258-a008-4a05-ae27-1d5c03654aa2\" (UID: \"89e4d258-a008-4a05-ae27-1d5c03654aa2\") " Jan 20 04:06:00 crc kubenswrapper[4898]: I0120 04:06:00.207563 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89e4d258-a008-4a05-ae27-1d5c03654aa2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89e4d258-a008-4a05-ae27-1d5c03654aa2" (UID: "89e4d258-a008-4a05-ae27-1d5c03654aa2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:00 crc kubenswrapper[4898]: I0120 04:06:00.226072 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e4d258-a008-4a05-ae27-1d5c03654aa2-kube-api-access-jhgl2" (OuterVolumeSpecName: "kube-api-access-jhgl2") pod "89e4d258-a008-4a05-ae27-1d5c03654aa2" (UID: "89e4d258-a008-4a05-ae27-1d5c03654aa2"). InnerVolumeSpecName "kube-api-access-jhgl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:00 crc kubenswrapper[4898]: I0120 04:06:00.309110 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhgl2\" (UniqueName: \"kubernetes.io/projected/89e4d258-a008-4a05-ae27-1d5c03654aa2-kube-api-access-jhgl2\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:00 crc kubenswrapper[4898]: I0120 04:06:00.309137 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89e4d258-a008-4a05-ae27-1d5c03654aa2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:00 crc kubenswrapper[4898]: I0120 04:06:00.809274 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5wm2t" event={"ID":"89e4d258-a008-4a05-ae27-1d5c03654aa2","Type":"ContainerDied","Data":"df013354af894a417e9f250dc5595a3065a6b2ff5cb0d60b3758d7c064a2d581"} Jan 20 04:06:00 crc kubenswrapper[4898]: I0120 04:06:00.809314 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df013354af894a417e9f250dc5595a3065a6b2ff5cb0d60b3758d7c064a2d581" Jan 20 04:06:00 crc kubenswrapper[4898]: I0120 04:06:00.809930 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5wm2t" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.686228 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ng4j5" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.698111 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nrzcw" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.710926 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bwv6k" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.720497 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d299-account-create-update-tm2k8" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.753761 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-67f4-account-create-update-xwdx5" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.761121 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d158-account-create-update-nfbsk" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.779108 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzqrw\" (UniqueName: \"kubernetes.io/projected/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-kube-api-access-rzqrw\") pod \"8b590651-4b56-4cf7-8374-c8fe0c8b26e5\" (UID: \"8b590651-4b56-4cf7-8374-c8fe0c8b26e5\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.779413 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt9qf\" (UniqueName: \"kubernetes.io/projected/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-kube-api-access-qt9qf\") pod \"2a8a45e8-91bc-40b8-9a92-b8f82709a03a\" (UID: \"2a8a45e8-91bc-40b8-9a92-b8f82709a03a\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.779534 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-operator-scripts\") pod \"2a8a45e8-91bc-40b8-9a92-b8f82709a03a\" (UID: \"2a8a45e8-91bc-40b8-9a92-b8f82709a03a\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.779580 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-operator-scripts\") pod \"8b590651-4b56-4cf7-8374-c8fe0c8b26e5\" (UID: \"8b590651-4b56-4cf7-8374-c8fe0c8b26e5\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.780732 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b590651-4b56-4cf7-8374-c8fe0c8b26e5" (UID: "8b590651-4b56-4cf7-8374-c8fe0c8b26e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.781152 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a8a45e8-91bc-40b8-9a92-b8f82709a03a" (UID: "2a8a45e8-91bc-40b8-9a92-b8f82709a03a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.783776 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kcrtx" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.789664 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-kube-api-access-qt9qf" (OuterVolumeSpecName: "kube-api-access-qt9qf") pod "2a8a45e8-91bc-40b8-9a92-b8f82709a03a" (UID: "2a8a45e8-91bc-40b8-9a92-b8f82709a03a"). InnerVolumeSpecName "kube-api-access-qt9qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.789983 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-kube-api-access-rzqrw" (OuterVolumeSpecName: "kube-api-access-rzqrw") pod "8b590651-4b56-4cf7-8374-c8fe0c8b26e5" (UID: "8b590651-4b56-4cf7-8374-c8fe0c8b26e5"). InnerVolumeSpecName "kube-api-access-rzqrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.791731 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-206e-account-create-update-jlllb" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.837177 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d299-account-create-update-tm2k8" event={"ID":"67c91d56-5dc8-4607-aad0-85214357b977","Type":"ContainerDied","Data":"1a2b665203a1a2210f4e6e7f5af66c33a6bef9d60704351450b69b82ed30a655"} Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.837223 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2b665203a1a2210f4e6e7f5af66c33a6bef9d60704351450b69b82ed30a655" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.837273 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d299-account-create-update-tm2k8" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.838823 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ng4j5" event={"ID":"2a8a45e8-91bc-40b8-9a92-b8f82709a03a","Type":"ContainerDied","Data":"ed902f97dcb3b43533446ec46fd5b3db54ed288c69a497652ab5024dbdf9902d"} Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.838844 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed902f97dcb3b43533446ec46fd5b3db54ed288c69a497652ab5024dbdf9902d" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.838984 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ng4j5" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.842507 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nrzcw" event={"ID":"8b590651-4b56-4cf7-8374-c8fe0c8b26e5","Type":"ContainerDied","Data":"716a91aedb83d0a6a92866c05f9c8a5a339861d15608d428cf4f722cea65a688"} Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.842553 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="716a91aedb83d0a6a92866c05f9c8a5a339861d15608d428cf4f722cea65a688" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.842607 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nrzcw" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.846395 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kcrtx" event={"ID":"b14bc956-d554-4fef-be24-28d68be49afe","Type":"ContainerDied","Data":"33624472bb253f5e0ce443b28a33877eafe844a48a736772037b74b8ccbbc9dd"} Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.846853 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33624472bb253f5e0ce443b28a33877eafe844a48a736772037b74b8ccbbc9dd" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.846751 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kcrtx" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.858039 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-67f4-account-create-update-xwdx5" event={"ID":"8e3e17d9-6103-4600-8159-178bcefd2c84","Type":"ContainerDied","Data":"c621927c55a331bff66a9d30986cf3fbf8cc3812d3747df024674e907936b23d"} Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.858088 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c621927c55a331bff66a9d30986cf3fbf8cc3812d3747df024674e907936b23d" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.858174 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-67f4-account-create-update-xwdx5" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.863852 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq8nx" event={"ID":"c4018f65-d2f8-4e71-a417-1310469128c6","Type":"ContainerStarted","Data":"0b6d1f3e7ba31f075c1eb1ea3443fa265601df6a008177990660e40afe4e199b"} Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.866611 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7gkfn" event={"ID":"10755ec2-23ba-4fea-852a-546e494f98df","Type":"ContainerStarted","Data":"dc767f467ad644b7c118469ebba23adc2d68db5c0aebe2cffff58b3db28b8683"} Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.870390 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d158-account-create-update-nfbsk" event={"ID":"edb0f751-8705-49a0-9d9a-67633e2f0379","Type":"ContainerDied","Data":"ae26c43bde31c3be3970814a3623022e1cbf22aa770e9f765e6e2882c4937a79"} Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.870447 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae26c43bde31c3be3970814a3623022e1cbf22aa770e9f765e6e2882c4937a79" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.870610 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d158-account-create-update-nfbsk" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.871869 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bwv6k" event={"ID":"2e099701-df46-48de-883e-65d209f81af0","Type":"ContainerDied","Data":"466c51f48109a295358f01960c7e7043daa02edaf487ae4fff5849ed9f8a4122"} Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.871902 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="466c51f48109a295358f01960c7e7043daa02edaf487ae4fff5849ed9f8a4122" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.871986 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bwv6k" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.873602 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-206e-account-create-update-jlllb" event={"ID":"710b53e5-5753-4b12-b02e-516fc4b2ed8f","Type":"ContainerDied","Data":"762fd9113b2004c98b4f92f429d5b419ae7b43a7c293694f7a39ab4feab75352"} Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.873638 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="762fd9113b2004c98b4f92f429d5b419ae7b43a7c293694f7a39ab4feab75352" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.873729 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-206e-account-create-update-jlllb" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.882913 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r559d\" (UniqueName: \"kubernetes.io/projected/710b53e5-5753-4b12-b02e-516fc4b2ed8f-kube-api-access-r559d\") pod \"710b53e5-5753-4b12-b02e-516fc4b2ed8f\" (UID: \"710b53e5-5753-4b12-b02e-516fc4b2ed8f\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.883033 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcpd9\" (UniqueName: \"kubernetes.io/projected/8e3e17d9-6103-4600-8159-178bcefd2c84-kube-api-access-bcpd9\") pod \"8e3e17d9-6103-4600-8159-178bcefd2c84\" (UID: \"8e3e17d9-6103-4600-8159-178bcefd2c84\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.883088 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b7qg\" (UniqueName: \"kubernetes.io/projected/67c91d56-5dc8-4607-aad0-85214357b977-kube-api-access-4b7qg\") pod \"67c91d56-5dc8-4607-aad0-85214357b977\" (UID: \"67c91d56-5dc8-4607-aad0-85214357b977\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.883122 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb0f751-8705-49a0-9d9a-67633e2f0379-operator-scripts\") pod \"edb0f751-8705-49a0-9d9a-67633e2f0379\" (UID: \"edb0f751-8705-49a0-9d9a-67633e2f0379\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.883149 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e3e17d9-6103-4600-8159-178bcefd2c84-operator-scripts\") pod \"8e3e17d9-6103-4600-8159-178bcefd2c84\" (UID: \"8e3e17d9-6103-4600-8159-178bcefd2c84\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.883195 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68w4j\" (UniqueName: 
\"kubernetes.io/projected/edb0f751-8705-49a0-9d9a-67633e2f0379-kube-api-access-68w4j\") pod \"edb0f751-8705-49a0-9d9a-67633e2f0379\" (UID: \"edb0f751-8705-49a0-9d9a-67633e2f0379\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.883239 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e099701-df46-48de-883e-65d209f81af0-operator-scripts\") pod \"2e099701-df46-48de-883e-65d209f81af0\" (UID: \"2e099701-df46-48de-883e-65d209f81af0\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.883285 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/710b53e5-5753-4b12-b02e-516fc4b2ed8f-operator-scripts\") pod \"710b53e5-5753-4b12-b02e-516fc4b2ed8f\" (UID: \"710b53e5-5753-4b12-b02e-516fc4b2ed8f\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.883330 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c91d56-5dc8-4607-aad0-85214357b977-operator-scripts\") pod \"67c91d56-5dc8-4607-aad0-85214357b977\" (UID: \"67c91d56-5dc8-4607-aad0-85214357b977\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.883367 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b14bc956-d554-4fef-be24-28d68be49afe-operator-scripts\") pod \"b14bc956-d554-4fef-be24-28d68be49afe\" (UID: \"b14bc956-d554-4fef-be24-28d68be49afe\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.883396 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzgl7\" (UniqueName: \"kubernetes.io/projected/b14bc956-d554-4fef-be24-28d68be49afe-kube-api-access-wzgl7\") pod \"b14bc956-d554-4fef-be24-28d68be49afe\" (UID: \"b14bc956-d554-4fef-be24-28d68be49afe\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.883422 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6dkx\" (UniqueName: \"kubernetes.io/projected/2e099701-df46-48de-883e-65d209f81af0-kube-api-access-j6dkx\") pod \"2e099701-df46-48de-883e-65d209f81af0\" (UID: \"2e099701-df46-48de-883e-65d209f81af0\") " Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.884077 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzqrw\" (UniqueName: \"kubernetes.io/projected/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-kube-api-access-rzqrw\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.884093 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt9qf\" (UniqueName: \"kubernetes.io/projected/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-kube-api-access-qt9qf\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.884105 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a8a45e8-91bc-40b8-9a92-b8f82709a03a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.884118 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b590651-4b56-4cf7-8374-c8fe0c8b26e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.884584 4898 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b14bc956-d554-4fef-be24-28d68be49afe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b14bc956-d554-4fef-be24-28d68be49afe" (UID: "b14bc956-d554-4fef-be24-28d68be49afe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.885585 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710b53e5-5753-4b12-b02e-516fc4b2ed8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "710b53e5-5753-4b12-b02e-516fc4b2ed8f" (UID: "710b53e5-5753-4b12-b02e-516fc4b2ed8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.885798 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edb0f751-8705-49a0-9d9a-67633e2f0379-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "edb0f751-8705-49a0-9d9a-67633e2f0379" (UID: "edb0f751-8705-49a0-9d9a-67633e2f0379"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.886028 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/710b53e5-5753-4b12-b02e-516fc4b2ed8f-kube-api-access-r559d" (OuterVolumeSpecName: "kube-api-access-r559d") pod "710b53e5-5753-4b12-b02e-516fc4b2ed8f" (UID: "710b53e5-5753-4b12-b02e-516fc4b2ed8f"). InnerVolumeSpecName "kube-api-access-r559d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.886633 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e099701-df46-48de-883e-65d209f81af0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e099701-df46-48de-883e-65d209f81af0" (UID: "2e099701-df46-48de-883e-65d209f81af0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.886399 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c91d56-5dc8-4607-aad0-85214357b977-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67c91d56-5dc8-4607-aad0-85214357b977" (UID: "67c91d56-5dc8-4607-aad0-85214357b977"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.886708 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3e17d9-6103-4600-8159-178bcefd2c84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e3e17d9-6103-4600-8159-178bcefd2c84" (UID: "8e3e17d9-6103-4600-8159-178bcefd2c84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.887815 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb0f751-8705-49a0-9d9a-67633e2f0379-kube-api-access-68w4j" (OuterVolumeSpecName: "kube-api-access-68w4j") pod "edb0f751-8705-49a0-9d9a-67633e2f0379" (UID: "edb0f751-8705-49a0-9d9a-67633e2f0379"). InnerVolumeSpecName "kube-api-access-68w4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.888907 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e3e17d9-6103-4600-8159-178bcefd2c84-kube-api-access-bcpd9" (OuterVolumeSpecName: "kube-api-access-bcpd9") pod "8e3e17d9-6103-4600-8159-178bcefd2c84" (UID: "8e3e17d9-6103-4600-8159-178bcefd2c84"). InnerVolumeSpecName "kube-api-access-bcpd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.901964 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c91d56-5dc8-4607-aad0-85214357b977-kube-api-access-4b7qg" (OuterVolumeSpecName: "kube-api-access-4b7qg") pod "67c91d56-5dc8-4607-aad0-85214357b977" (UID: "67c91d56-5dc8-4607-aad0-85214357b977"). InnerVolumeSpecName "kube-api-access-4b7qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.921187 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e099701-df46-48de-883e-65d209f81af0-kube-api-access-j6dkx" (OuterVolumeSpecName: "kube-api-access-j6dkx") pod "2e099701-df46-48de-883e-65d209f81af0" (UID: "2e099701-df46-48de-883e-65d209f81af0"). InnerVolumeSpecName "kube-api-access-j6dkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.921303 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14bc956-d554-4fef-be24-28d68be49afe-kube-api-access-wzgl7" (OuterVolumeSpecName: "kube-api-access-wzgl7") pod "b14bc956-d554-4fef-be24-28d68be49afe" (UID: "b14bc956-d554-4fef-be24-28d68be49afe"). InnerVolumeSpecName "kube-api-access-wzgl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.986925 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b7qg\" (UniqueName: \"kubernetes.io/projected/67c91d56-5dc8-4607-aad0-85214357b977-kube-api-access-4b7qg\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.986956 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb0f751-8705-49a0-9d9a-67633e2f0379-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.986965 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e3e17d9-6103-4600-8159-178bcefd2c84-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.986974 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68w4j\" (UniqueName: \"kubernetes.io/projected/edb0f751-8705-49a0-9d9a-67633e2f0379-kube-api-access-68w4j\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.986983 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e099701-df46-48de-883e-65d209f81af0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.986991 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/710b53e5-5753-4b12-b02e-516fc4b2ed8f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.987001 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c91d56-5dc8-4607-aad0-85214357b977-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.987010 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b14bc956-d554-4fef-be24-28d68be49afe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.987018 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzgl7\" (UniqueName: \"kubernetes.io/projected/b14bc956-d554-4fef-be24-28d68be49afe-kube-api-access-wzgl7\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.987027 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6dkx\" (UniqueName: \"kubernetes.io/projected/2e099701-df46-48de-883e-65d209f81af0-kube-api-access-j6dkx\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.987035 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r559d\" (UniqueName: \"kubernetes.io/projected/710b53e5-5753-4b12-b02e-516fc4b2ed8f-kube-api-access-r559d\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.987043 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcpd9\" (UniqueName: \"kubernetes.io/projected/8e3e17d9-6103-4600-8159-178bcefd2c84-kube-api-access-bcpd9\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:03 crc kubenswrapper[4898]: I0120 04:06:03.991985 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-7gkfn" 
podStartSLOduration=3.453216105 podStartE2EDuration="8.991958324s" podCreationTimestamp="2026-01-20 04:05:55 +0000 UTC" firstStartedPulling="2026-01-20 04:05:58.01300759 +0000 UTC m=+1004.612795449" lastFinishedPulling="2026-01-20 04:06:03.551749789 +0000 UTC m=+1010.151537668" observedRunningTime="2026-01-20 04:06:03.970750998 +0000 UTC m=+1010.570538867" watchObservedRunningTime="2026-01-20 04:06:03.991958324 +0000 UTC m=+1010.591746183" Jan 20 04:06:04 crc kubenswrapper[4898]: I0120 04:06:04.885857 4898 generic.go:334] "Generic (PLEG): container finished" podID="c4018f65-d2f8-4e71-a417-1310469128c6" containerID="0b6d1f3e7ba31f075c1eb1ea3443fa265601df6a008177990660e40afe4e199b" exitCode=0 Jan 20 04:06:04 crc kubenswrapper[4898]: I0120 04:06:04.885936 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq8nx" event={"ID":"c4018f65-d2f8-4e71-a417-1310469128c6","Type":"ContainerDied","Data":"0b6d1f3e7ba31f075c1eb1ea3443fa265601df6a008177990660e40afe4e199b"} Jan 20 04:06:05 crc kubenswrapper[4898]: I0120 04:06:05.316662 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:06:05 crc kubenswrapper[4898]: I0120 04:06:05.393236 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qp8ts"] Jan 20 04:06:05 crc kubenswrapper[4898]: I0120 04:06:05.393651 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" podUID="457b496f-9f17-476b-bf51-30f948f83afb" containerName="dnsmasq-dns" containerID="cri-o://cb4199074583a7af2dbac646317eafaf23b3d964fb51addad2e8b286de7d3d3d" gracePeriod=10 Jan 20 04:06:05 crc kubenswrapper[4898]: I0120 04:06:05.910726 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq8nx" event={"ID":"c4018f65-d2f8-4e71-a417-1310469128c6","Type":"ContainerStarted","Data":"2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155"} Jan 20 04:06:05 crc kubenswrapper[4898]: I0120 04:06:05.913508 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" Jan 20 04:06:05 crc kubenswrapper[4898]: I0120 04:06:05.913978 4898 generic.go:334] "Generic (PLEG): container finished" podID="457b496f-9f17-476b-bf51-30f948f83afb" containerID="cb4199074583a7af2dbac646317eafaf23b3d964fb51addad2e8b286de7d3d3d" exitCode=0 Jan 20 04:06:05 crc kubenswrapper[4898]: I0120 04:06:05.914024 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" event={"ID":"457b496f-9f17-476b-bf51-30f948f83afb","Type":"ContainerDied","Data":"cb4199074583a7af2dbac646317eafaf23b3d964fb51addad2e8b286de7d3d3d"} Jan 20 04:06:05 crc kubenswrapper[4898]: I0120 04:06:05.914066 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" event={"ID":"457b496f-9f17-476b-bf51-30f948f83afb","Type":"ContainerDied","Data":"7d7bb4811d05ef5850d21e32ee50f1584ef59083aa5becd774bf8ef35f3a6067"} Jan 20 04:06:05 crc kubenswrapper[4898]: I0120 04:06:05.914081 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d7bb4811d05ef5850d21e32ee50f1584ef59083aa5becd774bf8ef35f3a6067" Jan 20 04:06:05 crc kubenswrapper[4898]: I0120 04:06:05.932034 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tq8nx" podStartSLOduration=2.383963529 podStartE2EDuration="7.932017591s" podCreationTimestamp="2026-01-20 04:05:58 +0000 UTC" firstStartedPulling="2026-01-20 04:05:59.762320018 +0000 UTC m=+1006.362107877" lastFinishedPulling="2026-01-20 04:06:05.31037408 +0000 UTC m=+1011.910161939" observedRunningTime="2026-01-20 04:06:05.931239796 +0000 UTC m=+1012.531027665" watchObservedRunningTime="2026-01-20 04:06:05.932017591 +0000 UTC m=+1012.531805440" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.021827 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9n4w\" (UniqueName: \"kubernetes.io/projected/457b496f-9f17-476b-bf51-30f948f83afb-kube-api-access-v9n4w\") pod \"457b496f-9f17-476b-bf51-30f948f83afb\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.022009 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-nb\") pod \"457b496f-9f17-476b-bf51-30f948f83afb\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.022053 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-config\") pod \"457b496f-9f17-476b-bf51-30f948f83afb\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.022152 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-dns-svc\") pod \"457b496f-9f17-476b-bf51-30f948f83afb\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.022192 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-sb\") pod \"457b496f-9f17-476b-bf51-30f948f83afb\" (UID: \"457b496f-9f17-476b-bf51-30f948f83afb\") " 
Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.028264 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457b496f-9f17-476b-bf51-30f948f83afb-kube-api-access-v9n4w" (OuterVolumeSpecName: "kube-api-access-v9n4w") pod "457b496f-9f17-476b-bf51-30f948f83afb" (UID: "457b496f-9f17-476b-bf51-30f948f83afb"). InnerVolumeSpecName "kube-api-access-v9n4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.073798 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-config" (OuterVolumeSpecName: "config") pod "457b496f-9f17-476b-bf51-30f948f83afb" (UID: "457b496f-9f17-476b-bf51-30f948f83afb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.079966 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "457b496f-9f17-476b-bf51-30f948f83afb" (UID: "457b496f-9f17-476b-bf51-30f948f83afb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.092579 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "457b496f-9f17-476b-bf51-30f948f83afb" (UID: "457b496f-9f17-476b-bf51-30f948f83afb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.092618 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "457b496f-9f17-476b-bf51-30f948f83afb" (UID: "457b496f-9f17-476b-bf51-30f948f83afb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.124379 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.124449 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.124464 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.124475 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/457b496f-9f17-476b-bf51-30f948f83afb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.124510 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9n4w\" (UniqueName: \"kubernetes.io/projected/457b496f-9f17-476b-bf51-30f948f83afb-kube-api-access-v9n4w\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.922545 4898 generic.go:334] "Generic (PLEG): container finished" podID="10755ec2-23ba-4fea-852a-546e494f98df" containerID="dc767f467ad644b7c118469ebba23adc2d68db5c0aebe2cffff58b3db28b8683" exitCode=0 Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.922629 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7gkfn" event={"ID":"10755ec2-23ba-4fea-852a-546e494f98df","Type":"ContainerDied","Data":"dc767f467ad644b7c118469ebba23adc2d68db5c0aebe2cffff58b3db28b8683"} Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.922843 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qp8ts" Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.976825 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qp8ts"] Jan 20 04:06:06 crc kubenswrapper[4898]: I0120 04:06:06.985186 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qp8ts"] Jan 20 04:06:07 crc kubenswrapper[4898]: I0120 04:06:07.740562 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="457b496f-9f17-476b-bf51-30f948f83afb" path="/var/lib/kubelet/pods/457b496f-9f17-476b-bf51-30f948f83afb/volumes" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.392667 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.571017 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txw9q\" (UniqueName: \"kubernetes.io/projected/10755ec2-23ba-4fea-852a-546e494f98df-kube-api-access-txw9q\") pod \"10755ec2-23ba-4fea-852a-546e494f98df\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.571217 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-config-data\") pod \"10755ec2-23ba-4fea-852a-546e494f98df\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.571246 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-combined-ca-bundle\") pod \"10755ec2-23ba-4fea-852a-546e494f98df\" (UID: \"10755ec2-23ba-4fea-852a-546e494f98df\") " Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.577673 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10755ec2-23ba-4fea-852a-546e494f98df-kube-api-access-txw9q" (OuterVolumeSpecName: "kube-api-access-txw9q") pod "10755ec2-23ba-4fea-852a-546e494f98df" (UID: "10755ec2-23ba-4fea-852a-546e494f98df"). InnerVolumeSpecName "kube-api-access-txw9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.596305 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10755ec2-23ba-4fea-852a-546e494f98df" (UID: "10755ec2-23ba-4fea-852a-546e494f98df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.614900 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-config-data" (OuterVolumeSpecName: "config-data") pod "10755ec2-23ba-4fea-852a-546e494f98df" (UID: "10755ec2-23ba-4fea-852a-546e494f98df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.673452 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txw9q\" (UniqueName: \"kubernetes.io/projected/10755ec2-23ba-4fea-852a-546e494f98df-kube-api-access-txw9q\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.673487 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.673498 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10755ec2-23ba-4fea-852a-546e494f98df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.739265 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.739596 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.782070 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.941110 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7gkfn" event={"ID":"10755ec2-23ba-4fea-852a-546e494f98df","Type":"ContainerDied","Data":"1f86389dda4d9f073fb5d7393d0c958c88fae94d9404721d9476744ca5ec350c"} Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.941174 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f86389dda4d9f073fb5d7393d0c958c88fae94d9404721d9476744ca5ec350c" Jan 20 04:06:08 crc kubenswrapper[4898]: I0120 04:06:08.941149 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7gkfn" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.201043 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-q8rfg"] Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.201820 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb0f751-8705-49a0-9d9a-67633e2f0379" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.201847 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb0f751-8705-49a0-9d9a-67633e2f0379" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.201872 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8a45e8-91bc-40b8-9a92-b8f82709a03a" containerName="mariadb-database-create" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.201882 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8a45e8-91bc-40b8-9a92-b8f82709a03a" containerName="mariadb-database-create" Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.201902 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457b496f-9f17-476b-bf51-30f948f83afb" containerName="dnsmasq-dns" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.201910 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="457b496f-9f17-476b-bf51-30f948f83afb" containerName="dnsmasq-dns" Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.201927 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14bc956-d554-4fef-be24-28d68be49afe" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.201936 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14bc956-d554-4fef-be24-28d68be49afe" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.201950 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3e17d9-6103-4600-8159-178bcefd2c84" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.201957 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3e17d9-6103-4600-8159-178bcefd2c84" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.201968 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e099701-df46-48de-883e-65d209f81af0" containerName="mariadb-database-create" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.201976 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e099701-df46-48de-883e-65d209f81af0" containerName="mariadb-database-create" Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.201988 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c91d56-5dc8-4607-aad0-85214357b977" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.201996 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c91d56-5dc8-4607-aad0-85214357b977" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.202004 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e4d258-a008-4a05-ae27-1d5c03654aa2" containerName="mariadb-database-create" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202011 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e4d258-a008-4a05-ae27-1d5c03654aa2" containerName="mariadb-database-create" 
Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.202022 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10755ec2-23ba-4fea-852a-546e494f98df" containerName="keystone-db-sync" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202029 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="10755ec2-23ba-4fea-852a-546e494f98df" containerName="keystone-db-sync" Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.202037 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457b496f-9f17-476b-bf51-30f948f83afb" containerName="init" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202045 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="457b496f-9f17-476b-bf51-30f948f83afb" containerName="init" Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.202065 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b590651-4b56-4cf7-8374-c8fe0c8b26e5" containerName="mariadb-database-create" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202074 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b590651-4b56-4cf7-8374-c8fe0c8b26e5" containerName="mariadb-database-create" Jan 20 04:06:09 crc kubenswrapper[4898]: E0120 04:06:09.202086 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="710b53e5-5753-4b12-b02e-516fc4b2ed8f" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202093 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="710b53e5-5753-4b12-b02e-516fc4b2ed8f" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202289 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="457b496f-9f17-476b-bf51-30f948f83afb" containerName="dnsmasq-dns" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202305 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b590651-4b56-4cf7-8374-c8fe0c8b26e5" containerName="mariadb-database-create" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202324 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3e17d9-6103-4600-8159-178bcefd2c84" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202336 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="10755ec2-23ba-4fea-852a-546e494f98df" containerName="keystone-db-sync" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202349 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e4d258-a008-4a05-ae27-1d5c03654aa2" containerName="mariadb-database-create" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202357 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e099701-df46-48de-883e-65d209f81af0" containerName="mariadb-database-create" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202367 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a8a45e8-91bc-40b8-9a92-b8f82709a03a" containerName="mariadb-database-create" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202380 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="710b53e5-5753-4b12-b02e-516fc4b2ed8f" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202390 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14bc956-d554-4fef-be24-28d68be49afe" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: 
I0120 04:06:09.202403 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb0f751-8705-49a0-9d9a-67633e2f0379" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.202411 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c91d56-5dc8-4607-aad0-85214357b977" containerName="mariadb-account-create-update" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.209256 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.219459 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-q8rfg"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.244504 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-n4zj8"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.246207 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.252263 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.252423 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.253422 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.255620 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.258367 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t7cgs" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.287957 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n4zj8"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.348526 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-rdd4p"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.349629 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-rdd4p" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.353184 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.353464 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-mtg7q" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.361066 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rdd4p"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.395781 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-scripts\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.395823 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-combined-ca-bundle\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.395859 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85s28\" (UniqueName: \"kubernetes.io/projected/717fd193-b548-41c7-bea0-b43fb73e3535-kube-api-access-85s28\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.395884 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.395906 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htk2k\" (UniqueName: \"kubernetes.io/projected/9776970f-ee91-4d2c-ab58-2736f541c2f0-kube-api-access-htk2k\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.395927 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-fernet-keys\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.395970 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-credential-keys\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.395996 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.396016 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.396038 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-config\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.396058 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.396082 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-config-data\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.464523 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9ws79"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.465560 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.479051 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.479224 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.485707 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rs9pl" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501236 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-config-data\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501289 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-config-data\") pod \"heat-db-sync-rdd4p\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " pod="openstack/heat-db-sync-rdd4p" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501325 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-scripts\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501341 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-combined-ca-bundle\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501375 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-combined-ca-bundle\") pod \"heat-db-sync-rdd4p\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " pod="openstack/heat-db-sync-rdd4p" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501392 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85s28\" (UniqueName: \"kubernetes.io/projected/717fd193-b548-41c7-bea0-b43fb73e3535-kube-api-access-85s28\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501417 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501456 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htk2k\" (UniqueName: \"kubernetes.io/projected/9776970f-ee91-4d2c-ab58-2736f541c2f0-kube-api-access-htk2k\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: 
\"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501475 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-fernet-keys\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501521 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-credential-keys\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501541 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501563 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501591 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-config\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501623 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nxgf\" (UniqueName: \"kubernetes.io/projected/ad38dd1a-677c-4db0-b349-684b1ca42820-kube-api-access-9nxgf\") pod \"heat-db-sync-rdd4p\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " pod="openstack/heat-db-sync-rdd4p" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.501646 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.502446 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.505181 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 
crc kubenswrapper[4898]: I0120 04:06:09.511342 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9ws79"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.512251 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.512860 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.513459 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-config\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.514571 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-credential-keys\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.519915 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-scripts\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.522355 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-fernet-keys\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.525228 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-config-data\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.529030 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-combined-ca-bundle\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.597496 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pfvjb"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.599156 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.603733 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8b58\" (UniqueName: \"kubernetes.io/projected/9e533f97-e194-486f-9125-b29cf19e6648-kube-api-access-z8b58\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.603824 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-scripts\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.603861 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nxgf\" (UniqueName: \"kubernetes.io/projected/ad38dd1a-677c-4db0-b349-684b1ca42820-kube-api-access-9nxgf\") pod \"heat-db-sync-rdd4p\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " pod="openstack/heat-db-sync-rdd4p" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.603879 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-db-sync-config-data\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.603897 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-combined-ca-bundle\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.603926 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-config-data\") pod \"heat-db-sync-rdd4p\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " pod="openstack/heat-db-sync-rdd4p" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.603945 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-config-data\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.603980 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e533f97-e194-486f-9125-b29cf19e6648-etc-machine-id\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.604001 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-combined-ca-bundle\") pod \"heat-db-sync-rdd4p\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " pod="openstack/heat-db-sync-rdd4p" Jan 20 
04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.617681 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85s28\" (UniqueName: \"kubernetes.io/projected/717fd193-b548-41c7-bea0-b43fb73e3535-kube-api-access-85s28\") pod \"keystone-bootstrap-n4zj8\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.621168 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-config-data\") pod \"heat-db-sync-rdd4p\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " pod="openstack/heat-db-sync-rdd4p" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.633588 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htk2k\" (UniqueName: \"kubernetes.io/projected/9776970f-ee91-4d2c-ab58-2736f541c2f0-kube-api-access-htk2k\") pod \"dnsmasq-dns-bbf5cc879-q8rfg\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.659528 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pfvjb"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.677136 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-combined-ca-bundle\") pod \"heat-db-sync-rdd4p\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " pod="openstack/heat-db-sync-rdd4p" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.679678 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sppjk"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.689445 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.703362 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ng5wv" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.710333 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-utilities\") pod \"community-operators-pfvjb\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.710389 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfxmd\" (UniqueName: \"kubernetes.io/projected/456251ef-04f7-4abb-bbc3-2270ce7c33a6-kube-api-access-xfxmd\") pod \"community-operators-pfvjb\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.710424 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-config-data\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.710491 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e533f97-e194-486f-9125-b29cf19e6648-etc-machine-id\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.710531 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8b58\" (UniqueName: \"kubernetes.io/projected/9e533f97-e194-486f-9125-b29cf19e6648-kube-api-access-z8b58\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.710597 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-scripts\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.710646 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-db-sync-config-data\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.710669 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-combined-ca-bundle\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.710701 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-catalog-content\") pod \"community-operators-pfvjb\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.710991 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e533f97-e194-486f-9125-b29cf19e6648-etc-machine-id\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.711873 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.725568 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nxgf\" (UniqueName: \"kubernetes.io/projected/ad38dd1a-677c-4db0-b349-684b1ca42820-kube-api-access-9nxgf\") pod \"heat-db-sync-rdd4p\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " pod="openstack/heat-db-sync-rdd4p" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.726127 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-scripts\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.726779 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-combined-ca-bundle\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.727171 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.727667 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-db-sync-config-data\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.739621 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-config-data\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.747133 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8b58\" (UniqueName: \"kubernetes.io/projected/9e533f97-e194-486f-9125-b29cf19e6648-kube-api-access-z8b58\") pod \"cinder-db-sync-9ws79\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.769536 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sppjk"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.788869 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9ws79" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.791070 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.792961 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.794826 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.795056 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.813002 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-catalog-content\") pod \"community-operators-pfvjb\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.813060 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-utilities\") pod \"community-operators-pfvjb\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.813083 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfxmd\" (UniqueName: \"kubernetes.io/projected/456251ef-04f7-4abb-bbc3-2270ce7c33a6-kube-api-access-xfxmd\") pod \"community-operators-pfvjb\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.813114 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dpcg\" (UniqueName: \"kubernetes.io/projected/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-kube-api-access-5dpcg\") pod \"neutron-db-sync-sppjk\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.813144 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-config\") pod \"neutron-db-sync-sppjk\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.813249 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-combined-ca-bundle\") pod \"neutron-db-sync-sppjk\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.813803 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-catalog-content\") pod \"community-operators-pfvjb\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.814031 4898 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-utilities\") pod \"community-operators-pfvjb\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.828364 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.833727 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-vvf7g"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.834730 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.842794 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.842984 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.843150 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ldtwr" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.850822 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.860937 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfxmd\" (UniqueName: \"kubernetes.io/projected/456251ef-04f7-4abb-bbc3-2270ce7c33a6-kube-api-access-xfxmd\") pod \"community-operators-pfvjb\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.864517 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.865136 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.879956 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-q8rfg"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.918226 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpcg\" (UniqueName: \"kubernetes.io/projected/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-kube-api-access-5dpcg\") pod \"neutron-db-sync-sppjk\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.918313 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-config\") pod \"neutron-db-sync-sppjk\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.918415 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-scripts\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.918484 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-run-httpd\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.918551 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.918617 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-config-data\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.918703 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.918763 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-combined-ca-bundle\") pod \"neutron-db-sync-sppjk\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.918798 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr2zk\" (UniqueName: \"kubernetes.io/projected/635aa30f-d711-43d2-906b-b951f5c6a9ad-kube-api-access-nr2zk\") pod \"ceilometer-0\" (UID: 
\"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.918843 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-log-httpd\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.925518 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-config\") pod \"neutron-db-sync-sppjk\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.925691 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-combined-ca-bundle\") pod \"neutron-db-sync-sppjk\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.943729 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dpcg\" (UniqueName: \"kubernetes.io/projected/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-kube-api-access-5dpcg\") pod \"neutron-db-sync-sppjk\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.965992 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vvf7g"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.975811 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.976198 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.976592 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-bwv29"] Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.976994 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rdd4p" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.979228 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.984991 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-txqlm" Jan 20 04:06:09 crc kubenswrapper[4898]: I0120 04:06:09.985175 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:09.991635 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bwv29"] Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.014409 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b6z25"] Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.016509 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.018701 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b6z25"] Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.036071 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-config-data\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037178 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037234 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037304 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-config\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037332 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-scripts\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037366 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr2zk\" (UniqueName: \"kubernetes.io/projected/635aa30f-d711-43d2-906b-b951f5c6a9ad-kube-api-access-nr2zk\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037423 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037451 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-log-httpd\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037597 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-scripts\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037633 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-run-httpd\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037675 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg46r\" (UniqueName: \"kubernetes.io/projected/bf594a05-0b51-41bc-b43d-ae25e2b98843-kube-api-access-wg46r\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037709 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb4wp\" (UniqueName: \"kubernetes.io/projected/f911b103-17f6-4caf-86f3-56f70295a884-kube-api-access-rb4wp\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037725 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037742 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037759 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037783 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f911b103-17f6-4caf-86f3-56f70295a884-logs\") pod 
\"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037833 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-config-data\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.037859 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-combined-ca-bundle\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.045080 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-log-httpd\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.048865 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.053357 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-run-httpd\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.061921 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-config-data\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.074518 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.076512 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr2zk\" (UniqueName: \"kubernetes.io/projected/635aa30f-d711-43d2-906b-b951f5c6a9ad-kube-api-access-nr2zk\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.091270 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-scripts\") pod \"ceilometer-0\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.140774 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-nb\") pod 
\"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141169 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-db-sync-config-data\") pod \"barbican-db-sync-bwv29\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141193 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p857\" (UniqueName: \"kubernetes.io/projected/2333e11a-59ac-4a16-914a-e846f5fa04d7-kube-api-access-2p857\") pod \"barbican-db-sync-bwv29\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141263 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg46r\" (UniqueName: \"kubernetes.io/projected/bf594a05-0b51-41bc-b43d-ae25e2b98843-kube-api-access-wg46r\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141279 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-combined-ca-bundle\") pod \"barbican-db-sync-bwv29\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141300 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb4wp\" (UniqueName: \"kubernetes.io/projected/f911b103-17f6-4caf-86f3-56f70295a884-kube-api-access-rb4wp\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141317 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141334 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141354 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f911b103-17f6-4caf-86f3-56f70295a884-logs\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141397 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-combined-ca-bundle\") pod 
\"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141443 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-config-data\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141461 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141489 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-config\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.141504 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-scripts\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.143323 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.143948 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.144203 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f911b103-17f6-4caf-86f3-56f70295a884-logs\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.144896 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-config\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.145009 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.145645 4898 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.153504 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-config-data\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.165940 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb4wp\" (UniqueName: \"kubernetes.io/projected/f911b103-17f6-4caf-86f3-56f70295a884-kube-api-access-rb4wp\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.166485 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-combined-ca-bundle\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.169391 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-scripts\") pod \"placement-db-sync-vvf7g\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.170770 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg46r\" (UniqueName: \"kubernetes.io/projected/bf594a05-0b51-41bc-b43d-ae25e2b98843-kube-api-access-wg46r\") pod \"dnsmasq-dns-56df8fb6b7-b6z25\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.190835 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.201803 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.228522 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.244766 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-combined-ca-bundle\") pod \"barbican-db-sync-bwv29\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.244930 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-db-sync-config-data\") pod \"barbican-db-sync-bwv29\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.244949 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p857\" (UniqueName: \"kubernetes.io/projected/2333e11a-59ac-4a16-914a-e846f5fa04d7-kube-api-access-2p857\") pod \"barbican-db-sync-bwv29\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.268265 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p857\" (UniqueName: \"kubernetes.io/projected/2333e11a-59ac-4a16-914a-e846f5fa04d7-kube-api-access-2p857\") pod \"barbican-db-sync-bwv29\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.268353 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-db-sync-config-data\") pod \"barbican-db-sync-bwv29\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.286149 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-combined-ca-bundle\") pod \"barbican-db-sync-bwv29\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.350123 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.391117 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.415196 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.418075 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.426540 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.428955 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.429206 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.429334 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-47npl" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.455782 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.534272 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9ws79"] Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.550220 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.550284 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.550321 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.550341 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.550363 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-logs\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.550393 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.550860 4898 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.550906 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k79t\" (UniqueName: \"kubernetes.io/projected/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-kube-api-access-2k79t\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.642405 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.644270 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.655010 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.655196 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.656745 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.656800 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.656834 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.656856 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.656876 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-logs\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.656906 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.656944 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.656995 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k79t\" (UniqueName: \"kubernetes.io/projected/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-kube-api-access-2k79t\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.659604 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.661268 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.661508 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-logs\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.664618 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.670844 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.670974 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.676349 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.682847 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k79t\" (UniqueName: \"kubernetes.io/projected/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-kube-api-access-2k79t\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.716161 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.718830 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.758632 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.759296 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.759331 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-logs\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.759488 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.759512 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.759540 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.759749 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc62l\" (UniqueName: \"kubernetes.io/projected/f9355d0b-874f-4b9d-a48c-786180c6c94b-kube-api-access-gc62l\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " 
pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.759812 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.759870 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.862537 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc62l\" (UniqueName: \"kubernetes.io/projected/f9355d0b-874f-4b9d-a48c-786180c6c94b-kube-api-access-gc62l\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.864479 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.864587 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.864824 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.864848 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-logs\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.865016 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.865038 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " 
pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.865070 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.865585 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-logs\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.865823 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.865909 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.867862 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.867995 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.868951 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.871064 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.891925 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc62l\" (UniqueName: \"kubernetes.io/projected/f9355d0b-874f-4b9d-a48c-786180c6c94b-kube-api-access-gc62l\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: W0120 04:06:10.904394 4898 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9776970f_ee91_4d2c_ab58_2736f541c2f0.slice/crio-3bb246b29e33332d7c5398fe288c3b9500eb798446c40a40cd8989938bc235c4 WatchSource:0}: Error finding container 3bb246b29e33332d7c5398fe288c3b9500eb798446c40a40cd8989938bc235c4: Status 404 returned error can't find the container with id 3bb246b29e33332d7c5398fe288c3b9500eb798446c40a40cd8989938bc235c4 Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.908367 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-q8rfg"] Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.915066 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pfvjb"] Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.917484 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:10 crc kubenswrapper[4898]: I0120 04:06:10.999759 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfvjb" event={"ID":"456251ef-04f7-4abb-bbc3-2270ce7c33a6","Type":"ContainerStarted","Data":"a17b76b8db4314ba69e94efcc8a2544e961dfb4cc8b45bc89c5e1563ea40c1f0"} Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.002044 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ws79" event={"ID":"9e533f97-e194-486f-9125-b29cf19e6648","Type":"ContainerStarted","Data":"3cafbcd20d24961c32820a7f2ba6fcc9e99b4429625b91635a54df79c36aefd4"} Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.004659 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n4zj8"] Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.005674 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" event={"ID":"9776970f-ee91-4d2c-ab58-2736f541c2f0","Type":"ContainerStarted","Data":"3bb246b29e33332d7c5398fe288c3b9500eb798446c40a40cd8989938bc235c4"} Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.031207 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rdd4p"] Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.033728 4898 util.go:30] "No sandbox for pod can be found. 
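The run above is the kubelet volume manager's complete attach-and-mount pass for both glance pods: operationExecutor.VerifyControllerAttachedVolume confirms each volume is attached, operationExecutor.MountVolume starts the operation, and MountVolume.MountDevice / MountVolume.SetUp report completion (the local PVs land under /mnt/openstack/pv02 and /mnt/openstack/pv11). A throwaway Go sketch for tracing that lifecycle per pod/volume out of a journal dump follows; the regexes are inferred from the quoting in these exact lines, and the sketch assumes journalctl's usual one-entry-per-line output:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// phaseRe matches the "... started" volume-manager lines seen above, e.g.
//   operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" ... pod="openstack/glance-default-external-api-0"
// The volume name is wrapped in escaped quotes inside the klog message,
// hence the optional backslashes.
var phaseRe = regexp.MustCompile(
	`operationExecutor\.(\w+) started for volume \\?"([^"\\]+)\\?".*pod="([^"]+)"`)

// doneRe matches the MountVolume.MountDevice / MountVolume.SetUp
// "succeeded" confirmations from operation_generator.go.
var doneRe = regexp.MustCompile(
	`MountVolume\.(MountDevice|SetUp) succeeded for volume \\?"([^"\\]+)\\?".*pod="([^"]+)"`)

func main() {
	// phases[pod/volume] accumulates the observed lifecycle in log order.
	phases := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	for sc.Scan() {
		line := sc.Text()
		if m := phaseRe.FindStringSubmatch(line); m != nil {
			key := m[3] + "/" + m[2]
			phases[key] = append(phases[key], m[1]+" started")
		}
		if m := doneRe.FindStringSubmatch(line); m != nil {
			key := m[3] + "/" + m[2]
			phases[key] = append(phases[key], "MountVolume."+m[1]+" succeeded")
		}
	}
	for key, seq := range phases {
		fmt.Println(key, "->", seq)
	}
}
```

Fed this section, each glance volume should show the same three-step shape: VerifyControllerAttachedVolume started, MountVolume started, then one or two succeeded records (MountDevice only appears for the local-volume PVs, which need a device mount before SetUp).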
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.221551 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sppjk"] Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.237120 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vvf7g"] Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.245868 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:06:11 crc kubenswrapper[4898]: W0120 04:06:11.298788 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod635aa30f_d711_43d2_906b_b951f5c6a9ad.slice/crio-83b5f75969d9259b65fd610041ec7cf663a1c23517984366459c121ece269ce1 WatchSource:0}: Error finding container 83b5f75969d9259b65fd610041ec7cf663a1c23517984366459c121ece269ce1: Status 404 returned error can't find the container with id 83b5f75969d9259b65fd610041ec7cf663a1c23517984366459c121ece269ce1 Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.490692 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b6z25"] Jan 20 04:06:11 crc kubenswrapper[4898]: W0120 04:06:11.505212 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf594a05_0b51_41bc_b43d_ae25e2b98843.slice/crio-fec6f46ff0171396ee5d83bdc8c2a53e084cca058a9d2de469ac3d7e94c75076 WatchSource:0}: Error finding container fec6f46ff0171396ee5d83bdc8c2a53e084cca058a9d2de469ac3d7e94c75076: Status 404 returned error can't find the container with id fec6f46ff0171396ee5d83bdc8c2a53e084cca058a9d2de469ac3d7e94c75076 Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.510232 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bwv29"] Jan 20 04:06:11 crc kubenswrapper[4898]: W0120 04:06:11.536396 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2333e11a_59ac_4a16_914a_e846f5fa04d7.slice/crio-8cdf2cde048fd3d9e3e28eb862d0a4d8aa9444735c2b6eb0f5de366075dee41a WatchSource:0}: Error finding container 8cdf2cde048fd3d9e3e28eb862d0a4d8aa9444735c2b6eb0f5de366075dee41a: Status 404 returned error can't find the container with id 8cdf2cde048fd3d9e3e28eb862d0a4d8aa9444735c2b6eb0f5de366075dee41a Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.709248 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:06:11 crc kubenswrapper[4898]: W0120 04:06:11.740289 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod014f3c64_ad47_4324_9cb9_2212e8ea2dc0.slice/crio-98e590d57b6ce4f34dfba30c67ac70c6c61ccd0a477899d67f1f9f263941f840 WatchSource:0}: Error finding container 98e590d57b6ce4f34dfba30c67ac70c6c61ccd0a477899d67f1f9f263941f840: Status 404 returned error can't find the container with id 98e590d57b6ce4f34dfba30c67ac70c6c61ccd0a477899d67f1f9f263941f840 Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.754156 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.840650 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 
04:06:11.884030 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 04:06:11 crc kubenswrapper[4898]: I0120 04:06:11.893723 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.021169 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sppjk" event={"ID":"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3","Type":"ContainerStarted","Data":"bd0b80f057185241fc9b6a60514cadf528712a69597d8a45d6056e73dfbd2024"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.021759 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sppjk" event={"ID":"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3","Type":"ContainerStarted","Data":"2f0667b874e0bc766b0a44fc29108ce9ce1569350f5dccbb0dbc9f328493abb7"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.030594 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bwv29" event={"ID":"2333e11a-59ac-4a16-914a-e846f5fa04d7","Type":"ContainerStarted","Data":"8cdf2cde048fd3d9e3e28eb862d0a4d8aa9444735c2b6eb0f5de366075dee41a"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.032500 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"635aa30f-d711-43d2-906b-b951f5c6a9ad","Type":"ContainerStarted","Data":"83b5f75969d9259b65fd610041ec7cf663a1c23517984366459c121ece269ce1"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.045321 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rdd4p" event={"ID":"ad38dd1a-677c-4db0-b349-684b1ca42820","Type":"ContainerStarted","Data":"ddd6d6700b1a8452b575a6da8816531bf3c8b80f75ad23cca6fd96e19e6bd782"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.046758 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sppjk" podStartSLOduration=3.046732905 podStartE2EDuration="3.046732905s" podCreationTimestamp="2026-01-20 04:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:12.03957783 +0000 UTC m=+1018.639365689" watchObservedRunningTime="2026-01-20 04:06:12.046732905 +0000 UTC m=+1018.646520764" Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.050499 4898 generic.go:334] "Generic (PLEG): container finished" podID="bf594a05-0b51-41bc-b43d-ae25e2b98843" containerID="d0a0e075d2d7ebdf7f846f78419c67c8bb005d8862bc3e038226adc475c5a5a9" exitCode=0 Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.050579 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" event={"ID":"bf594a05-0b51-41bc-b43d-ae25e2b98843","Type":"ContainerDied","Data":"d0a0e075d2d7ebdf7f846f78419c67c8bb005d8862bc3e038226adc475c5a5a9"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.050609 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" event={"ID":"bf594a05-0b51-41bc-b43d-ae25e2b98843","Type":"ContainerStarted","Data":"fec6f46ff0171396ee5d83bdc8c2a53e084cca058a9d2de469ac3d7e94c75076"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.058226 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
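pod_startup_latency_tracker's bookkeeping is kubelet-internal, but the arithmetic in the neutron-db-sync-sppjk entry above checks out directly: with no image pull (firstStartedPulling is the zero time), podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp. A quick check in Go; note the trailing "m=+..." monotonic-clock reading has to be cut off before time.Parse will accept the timestamp:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// Timestamps copied verbatim from the neutron-db-sync-sppjk
// "Observed pod startup duration" entry above.
const layout = "2006-01-02 15:04:05 -0700 MST" // Parse accepts fractional seconds too

func mustParse(s string) time.Time {
	// Drop Go's monotonic-clock suffix ("m=+1018.646..."), which is not
	// part of the wall-clock timestamp.
	if before, _, found := strings.Cut(s, " m="); found {
		s = before
	}
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-20 04:06:09 +0000 UTC")
	running := mustParse("2026-01-20 04:06:12.046732905 +0000 UTC m=+1018.646520764")
	// Reproduces podStartSLOduration=3.046732905 from the log entry.
	fmt.Println(running.Sub(created).Seconds())
}
```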
event={"ID":"014f3c64-ad47-4324-9cb9-2212e8ea2dc0","Type":"ContainerStarted","Data":"98e590d57b6ce4f34dfba30c67ac70c6c61ccd0a477899d67f1f9f263941f840"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.108201 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vvf7g" event={"ID":"f911b103-17f6-4caf-86f3-56f70295a884","Type":"ContainerStarted","Data":"d0a1a1fa9ea516e3f348f4bc97edf285bc61414eb8a12c6816b1da4b42c71e64"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.117708 4898 generic.go:334] "Generic (PLEG): container finished" podID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerID="2cfdf5a8fc3c264b956b0acd5d6caab0749d34aa6986413f6961de23d84c95ad" exitCode=0 Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.117799 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfvjb" event={"ID":"456251ef-04f7-4abb-bbc3-2270ce7c33a6","Type":"ContainerDied","Data":"2cfdf5a8fc3c264b956b0acd5d6caab0749d34aa6986413f6961de23d84c95ad"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.131972 4898 generic.go:334] "Generic (PLEG): container finished" podID="9776970f-ee91-4d2c-ab58-2736f541c2f0" containerID="1b28a35099011e7b7f012989f0ab5c460d95ca0eabe05b7a30d93f7bea934803" exitCode=0 Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.132040 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" event={"ID":"9776970f-ee91-4d2c-ab58-2736f541c2f0","Type":"ContainerDied","Data":"1b28a35099011e7b7f012989f0ab5c460d95ca0eabe05b7a30d93f7bea934803"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.133727 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9355d0b-874f-4b9d-a48c-786180c6c94b","Type":"ContainerStarted","Data":"344e5738c5ef4286515ea4cb700eb829a7eb5f596422bb96bcc6e2d96d028c55"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.143907 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n4zj8" event={"ID":"717fd193-b548-41c7-bea0-b43fb73e3535","Type":"ContainerStarted","Data":"435c71510aebeddd4d19a7b21833135efc5e312faca852b46cd8a74af230ce5e"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.143967 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n4zj8" event={"ID":"717fd193-b548-41c7-bea0-b43fb73e3535","Type":"ContainerStarted","Data":"0fa3650d216a4890132d8f5c77ba4410c00c083fa4a9b937c6b18b0bb271ebee"} Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.172209 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-n4zj8" podStartSLOduration=3.172175071 podStartE2EDuration="3.172175071s" podCreationTimestamp="2026-01-20 04:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:12.168801714 +0000 UTC m=+1018.768589563" watchObservedRunningTime="2026-01-20 04:06:12.172175071 +0000 UTC m=+1018.771962930" Jan 20 04:06:12 crc kubenswrapper[4898]: I0120 04:06:12.852717 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.016415 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htk2k\" (UniqueName: \"kubernetes.io/projected/9776970f-ee91-4d2c-ab58-2736f541c2f0-kube-api-access-htk2k\") pod \"9776970f-ee91-4d2c-ab58-2736f541c2f0\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.016555 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-svc\") pod \"9776970f-ee91-4d2c-ab58-2736f541c2f0\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.016606 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-swift-storage-0\") pod \"9776970f-ee91-4d2c-ab58-2736f541c2f0\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.017121 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-sb\") pod \"9776970f-ee91-4d2c-ab58-2736f541c2f0\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.017218 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-nb\") pod \"9776970f-ee91-4d2c-ab58-2736f541c2f0\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.017262 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-config\") pod \"9776970f-ee91-4d2c-ab58-2736f541c2f0\" (UID: \"9776970f-ee91-4d2c-ab58-2736f541c2f0\") " Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.072140 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9776970f-ee91-4d2c-ab58-2736f541c2f0-kube-api-access-htk2k" (OuterVolumeSpecName: "kube-api-access-htk2k") pod "9776970f-ee91-4d2c-ab58-2736f541c2f0" (UID: "9776970f-ee91-4d2c-ab58-2736f541c2f0"). InnerVolumeSpecName "kube-api-access-htk2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.081062 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-config" (OuterVolumeSpecName: "config") pod "9776970f-ee91-4d2c-ab58-2736f541c2f0" (UID: "9776970f-ee91-4d2c-ab58-2736f541c2f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.082588 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9776970f-ee91-4d2c-ab58-2736f541c2f0" (UID: "9776970f-ee91-4d2c-ab58-2736f541c2f0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.095504 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9776970f-ee91-4d2c-ab58-2736f541c2f0" (UID: "9776970f-ee91-4d2c-ab58-2736f541c2f0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.097644 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9776970f-ee91-4d2c-ab58-2736f541c2f0" (UID: "9776970f-ee91-4d2c-ab58-2736f541c2f0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.106315 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9776970f-ee91-4d2c-ab58-2736f541c2f0" (UID: "9776970f-ee91-4d2c-ab58-2736f541c2f0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.124643 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htk2k\" (UniqueName: \"kubernetes.io/projected/9776970f-ee91-4d2c-ab58-2736f541c2f0-kube-api-access-htk2k\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.124681 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.124692 4898 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.124701 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.124709 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.124717 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9776970f-ee91-4d2c-ab58-2736f541c2f0-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.167408 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" event={"ID":"9776970f-ee91-4d2c-ab58-2736f541c2f0","Type":"ContainerDied","Data":"3bb246b29e33332d7c5398fe288c3b9500eb798446c40a40cd8989938bc235c4"} Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.167476 4898 scope.go:117] "RemoveContainer" containerID="1b28a35099011e7b7f012989f0ab5c460d95ca0eabe05b7a30d93f7bea934803" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.167587 4898 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-q8rfg" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.207590 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" event={"ID":"bf594a05-0b51-41bc-b43d-ae25e2b98843","Type":"ContainerStarted","Data":"3d8931377e42d941f048ba2a6fab1b41744ebb803dc18ec98bb417df04719674"} Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.207836 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.236400 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"014f3c64-ad47-4324-9cb9-2212e8ea2dc0","Type":"ContainerStarted","Data":"a68444f46883d76c5e4c8517c9afcf6b211d7cd5b8a79a269bb343e5f94f87a7"} Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.254560 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-q8rfg"] Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.266371 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" podStartSLOduration=4.266353423 podStartE2EDuration="4.266353423s" podCreationTimestamp="2026-01-20 04:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:13.250130763 +0000 UTC m=+1019.849918622" watchObservedRunningTime="2026-01-20 04:06:13.266353423 +0000 UTC m=+1019.866141282" Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.276575 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-q8rfg"] Jan 20 04:06:13 crc kubenswrapper[4898]: I0120 04:06:13.785224 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9776970f-ee91-4d2c-ab58-2736f541c2f0" path="/var/lib/kubelet/pods/9776970f-ee91-4d2c-ab58-2736f541c2f0/volumes" Jan 20 04:06:14 crc kubenswrapper[4898]: I0120 04:06:14.259640 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9355d0b-874f-4b9d-a48c-786180c6c94b","Type":"ContainerStarted","Data":"7262cd00442d615cbb3007b4588ea7855500339fc23e751f0d42a63b49f30456"} Jan 20 04:06:14 crc kubenswrapper[4898]: I0120 04:06:14.277651 4898 generic.go:334] "Generic (PLEG): container finished" podID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerID="08fe7a52e54573a47e823db6efcd68e08ad2a4318425c6cc1850ac63e2d5f2e8" exitCode=0 Jan 20 04:06:14 crc kubenswrapper[4898]: I0120 04:06:14.277750 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfvjb" event={"ID":"456251ef-04f7-4abb-bbc3-2270ce7c33a6","Type":"ContainerDied","Data":"08fe7a52e54573a47e823db6efcd68e08ad2a4318425c6cc1850ac63e2d5f2e8"} Jan 20 04:06:15 crc kubenswrapper[4898]: I0120 04:06:15.324224 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"014f3c64-ad47-4324-9cb9-2212e8ea2dc0","Type":"ContainerStarted","Data":"486b70813865b28b59b24adf61b4b7d703873fb5d17c9a5c421f104dfe1448a3"} Jan 20 04:06:15 crc kubenswrapper[4898]: I0120 04:06:15.324281 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="014f3c64-ad47-4324-9cb9-2212e8ea2dc0" containerName="glance-log" 
containerID="cri-o://a68444f46883d76c5e4c8517c9afcf6b211d7cd5b8a79a269bb343e5f94f87a7" gracePeriod=30 Jan 20 04:06:15 crc kubenswrapper[4898]: I0120 04:06:15.324666 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="014f3c64-ad47-4324-9cb9-2212e8ea2dc0" containerName="glance-httpd" containerID="cri-o://486b70813865b28b59b24adf61b4b7d703873fb5d17c9a5c421f104dfe1448a3" gracePeriod=30 Jan 20 04:06:15 crc kubenswrapper[4898]: I0120 04:06:15.326873 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9355d0b-874f-4b9d-a48c-786180c6c94b","Type":"ContainerStarted","Data":"732c5ee194da6712665164211e61a7a8d3daaa1fa46b43da78ed25b06de5380a"} Jan 20 04:06:15 crc kubenswrapper[4898]: I0120 04:06:15.326935 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f9355d0b-874f-4b9d-a48c-786180c6c94b" containerName="glance-log" containerID="cri-o://7262cd00442d615cbb3007b4588ea7855500339fc23e751f0d42a63b49f30456" gracePeriod=30 Jan 20 04:06:15 crc kubenswrapper[4898]: I0120 04:06:15.326976 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f9355d0b-874f-4b9d-a48c-786180c6c94b" containerName="glance-httpd" containerID="cri-o://732c5ee194da6712665164211e61a7a8d3daaa1fa46b43da78ed25b06de5380a" gracePeriod=30 Jan 20 04:06:15 crc kubenswrapper[4898]: I0120 04:06:15.352289 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.352263007 podStartE2EDuration="6.352263007s" podCreationTimestamp="2026-01-20 04:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:15.346192866 +0000 UTC m=+1021.945980715" watchObservedRunningTime="2026-01-20 04:06:15.352263007 +0000 UTC m=+1021.952050866" Jan 20 04:06:15 crc kubenswrapper[4898]: I0120 04:06:15.375453 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.375416076 podStartE2EDuration="6.375416076s" podCreationTimestamp="2026-01-20 04:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:15.368805728 +0000 UTC m=+1021.968593587" watchObservedRunningTime="2026-01-20 04:06:15.375416076 +0000 UTC m=+1021.975203935" Jan 20 04:06:16 crc kubenswrapper[4898]: I0120 04:06:16.343637 4898 generic.go:334] "Generic (PLEG): container finished" podID="014f3c64-ad47-4324-9cb9-2212e8ea2dc0" containerID="486b70813865b28b59b24adf61b4b7d703873fb5d17c9a5c421f104dfe1448a3" exitCode=0 Jan 20 04:06:16 crc kubenswrapper[4898]: I0120 04:06:16.343961 4898 generic.go:334] "Generic (PLEG): container finished" podID="014f3c64-ad47-4324-9cb9-2212e8ea2dc0" containerID="a68444f46883d76c5e4c8517c9afcf6b211d7cd5b8a79a269bb343e5f94f87a7" exitCode=143 Jan 20 04:06:16 crc kubenswrapper[4898]: I0120 04:06:16.343727 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"014f3c64-ad47-4324-9cb9-2212e8ea2dc0","Type":"ContainerDied","Data":"486b70813865b28b59b24adf61b4b7d703873fb5d17c9a5c421f104dfe1448a3"} Jan 20 04:06:16 crc kubenswrapper[4898]: I0120 04:06:16.344055 4898 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"014f3c64-ad47-4324-9cb9-2212e8ea2dc0","Type":"ContainerDied","Data":"a68444f46883d76c5e4c8517c9afcf6b211d7cd5b8a79a269bb343e5f94f87a7"} Jan 20 04:06:16 crc kubenswrapper[4898]: I0120 04:06:16.351033 4898 generic.go:334] "Generic (PLEG): container finished" podID="f9355d0b-874f-4b9d-a48c-786180c6c94b" containerID="732c5ee194da6712665164211e61a7a8d3daaa1fa46b43da78ed25b06de5380a" exitCode=0 Jan 20 04:06:16 crc kubenswrapper[4898]: I0120 04:06:16.351068 4898 generic.go:334] "Generic (PLEG): container finished" podID="f9355d0b-874f-4b9d-a48c-786180c6c94b" containerID="7262cd00442d615cbb3007b4588ea7855500339fc23e751f0d42a63b49f30456" exitCode=143 Jan 20 04:06:16 crc kubenswrapper[4898]: I0120 04:06:16.351092 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9355d0b-874f-4b9d-a48c-786180c6c94b","Type":"ContainerDied","Data":"732c5ee194da6712665164211e61a7a8d3daaa1fa46b43da78ed25b06de5380a"} Jan 20 04:06:16 crc kubenswrapper[4898]: I0120 04:06:16.351117 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9355d0b-874f-4b9d-a48c-786180c6c94b","Type":"ContainerDied","Data":"7262cd00442d615cbb3007b4588ea7855500339fc23e751f0d42a63b49f30456"} Jan 20 04:06:17 crc kubenswrapper[4898]: I0120 04:06:17.360725 4898 generic.go:334] "Generic (PLEG): container finished" podID="717fd193-b548-41c7-bea0-b43fb73e3535" containerID="435c71510aebeddd4d19a7b21833135efc5e312faca852b46cd8a74af230ce5e" exitCode=0 Jan 20 04:06:17 crc kubenswrapper[4898]: I0120 04:06:17.360771 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n4zj8" event={"ID":"717fd193-b548-41c7-bea0-b43fb73e3535","Type":"ContainerDied","Data":"435c71510aebeddd4d19a7b21833135efc5e312faca852b46cd8a74af230ce5e"} Jan 20 04:06:18 crc kubenswrapper[4898]: I0120 04:06:18.802657 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:06:18 crc kubenswrapper[4898]: I0120 04:06:18.854860 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tq8nx"] Jan 20 04:06:19 crc kubenswrapper[4898]: I0120 04:06:19.377061 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tq8nx" podUID="c4018f65-d2f8-4e71-a417-1310469128c6" containerName="registry-server" containerID="cri-o://2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155" gracePeriod=2 Jan 20 04:06:20 crc kubenswrapper[4898]: I0120 04:06:20.391169 4898 generic.go:334] "Generic (PLEG): container finished" podID="c4018f65-d2f8-4e71-a417-1310469128c6" containerID="2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155" exitCode=0 Jan 20 04:06:20 crc kubenswrapper[4898]: I0120 04:06:20.391261 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq8nx" event={"ID":"c4018f65-d2f8-4e71-a417-1310469128c6","Type":"ContainerDied","Data":"2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155"} Jan 20 04:06:20 crc kubenswrapper[4898]: I0120 04:06:20.392636 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:06:20 crc kubenswrapper[4898]: I0120 04:06:20.478879 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5f59b8f679-w46nx"] Jan 20 04:06:20 crc kubenswrapper[4898]: I0120 04:06:20.479131 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" podUID="393c968e-aaea-4b5f-86ba-44ffa221b98e" containerName="dnsmasq-dns" containerID="cri-o://1b813b2d842ece85416dbd0c831e6e8ddc1cc59432f832b3735b9790d5eaf388" gracePeriod=10 Jan 20 04:06:21 crc kubenswrapper[4898]: I0120 04:06:21.401122 4898 generic.go:334] "Generic (PLEG): container finished" podID="393c968e-aaea-4b5f-86ba-44ffa221b98e" containerID="1b813b2d842ece85416dbd0c831e6e8ddc1cc59432f832b3735b9790d5eaf388" exitCode=0 Jan 20 04:06:21 crc kubenswrapper[4898]: I0120 04:06:21.401226 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" event={"ID":"393c968e-aaea-4b5f-86ba-44ffa221b98e","Type":"ContainerDied","Data":"1b813b2d842ece85416dbd0c831e6e8ddc1cc59432f832b3735b9790d5eaf388"} Jan 20 04:06:25 crc kubenswrapper[4898]: I0120 04:06:25.316234 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" podUID="393c968e-aaea-4b5f-86ba-44ffa221b98e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Jan 20 04:06:26 crc kubenswrapper[4898]: I0120 04:06:26.973646 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.169810 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-scripts\") pod \"717fd193-b548-41c7-bea0-b43fb73e3535\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.172184 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-config-data\") pod \"717fd193-b548-41c7-bea0-b43fb73e3535\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.172228 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85s28\" (UniqueName: \"kubernetes.io/projected/717fd193-b548-41c7-bea0-b43fb73e3535-kube-api-access-85s28\") pod \"717fd193-b548-41c7-bea0-b43fb73e3535\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.172265 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-combined-ca-bundle\") pod \"717fd193-b548-41c7-bea0-b43fb73e3535\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.172289 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-fernet-keys\") pod \"717fd193-b548-41c7-bea0-b43fb73e3535\" (UID: \"717fd193-b548-41c7-bea0-b43fb73e3535\") " Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.172390 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-credential-keys\") pod \"717fd193-b548-41c7-bea0-b43fb73e3535\" (UID: 
\"717fd193-b548-41c7-bea0-b43fb73e3535\") " Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.181260 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-scripts" (OuterVolumeSpecName: "scripts") pod "717fd193-b548-41c7-bea0-b43fb73e3535" (UID: "717fd193-b548-41c7-bea0-b43fb73e3535"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.181330 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/717fd193-b548-41c7-bea0-b43fb73e3535-kube-api-access-85s28" (OuterVolumeSpecName: "kube-api-access-85s28") pod "717fd193-b548-41c7-bea0-b43fb73e3535" (UID: "717fd193-b548-41c7-bea0-b43fb73e3535"). InnerVolumeSpecName "kube-api-access-85s28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.182569 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "717fd193-b548-41c7-bea0-b43fb73e3535" (UID: "717fd193-b548-41c7-bea0-b43fb73e3535"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.183624 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "717fd193-b548-41c7-bea0-b43fb73e3535" (UID: "717fd193-b548-41c7-bea0-b43fb73e3535"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.217665 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "717fd193-b548-41c7-bea0-b43fb73e3535" (UID: "717fd193-b548-41c7-bea0-b43fb73e3535"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.228237 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-config-data" (OuterVolumeSpecName: "config-data") pod "717fd193-b548-41c7-bea0-b43fb73e3535" (UID: "717fd193-b548-41c7-bea0-b43fb73e3535"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.275096 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.275125 4898 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.275134 4898 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.275143 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.275152 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/717fd193-b548-41c7-bea0-b43fb73e3535-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.275159 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85s28\" (UniqueName: \"kubernetes.io/projected/717fd193-b548-41c7-bea0-b43fb73e3535-kube-api-access-85s28\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.456530 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n4zj8" event={"ID":"717fd193-b548-41c7-bea0-b43fb73e3535","Type":"ContainerDied","Data":"0fa3650d216a4890132d8f5c77ba4410c00c083fa4a9b937c6b18b0bb271ebee"} Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.456571 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fa3650d216a4890132d8f5c77ba4410c00c083fa4a9b937c6b18b0bb271ebee" Jan 20 04:06:27 crc kubenswrapper[4898]: I0120 04:06:27.456606 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n4zj8" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.072646 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-n4zj8"] Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.080107 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-n4zj8"] Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.169132 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6n7z5"] Jan 20 04:06:28 crc kubenswrapper[4898]: E0120 04:06:28.169663 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="717fd193-b548-41c7-bea0-b43fb73e3535" containerName="keystone-bootstrap" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.169683 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="717fd193-b548-41c7-bea0-b43fb73e3535" containerName="keystone-bootstrap" Jan 20 04:06:28 crc kubenswrapper[4898]: E0120 04:06:28.169700 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9776970f-ee91-4d2c-ab58-2736f541c2f0" containerName="init" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.169708 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="9776970f-ee91-4d2c-ab58-2736f541c2f0" containerName="init" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.169883 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="717fd193-b548-41c7-bea0-b43fb73e3535" containerName="keystone-bootstrap" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.169903 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="9776970f-ee91-4d2c-ab58-2736f541c2f0" containerName="init" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.173473 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.175538 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.175924 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.176615 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.179251 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t7cgs" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.179264 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.181921 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6n7z5"] Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.293655 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpwgx\" (UniqueName: \"kubernetes.io/projected/7306e81d-2494-41f7-84ba-e23d15cf73c5-kube-api-access-dpwgx\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.293707 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-combined-ca-bundle\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.294521 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-config-data\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.294619 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-scripts\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.294712 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-fernet-keys\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.294809 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-credential-keys\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.397076 4898 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-config-data\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.397182 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-scripts\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.397205 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-fernet-keys\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.397262 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-credential-keys\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.397305 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpwgx\" (UniqueName: \"kubernetes.io/projected/7306e81d-2494-41f7-84ba-e23d15cf73c5-kube-api-access-dpwgx\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.397324 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-combined-ca-bundle\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.403521 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-scripts\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.404160 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-combined-ca-bundle\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.405565 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-fernet-keys\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.405930 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-config-data\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") 
" pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.406890 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-credential-keys\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.416839 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpwgx\" (UniqueName: \"kubernetes.io/projected/7306e81d-2494-41f7-84ba-e23d15cf73c5-kube-api-access-dpwgx\") pod \"keystone-bootstrap-6n7z5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: I0120 04:06:28.502444 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:28 crc kubenswrapper[4898]: E0120 04:06:28.740674 4898 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155 is running failed: container process not found" containerID="2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 04:06:28 crc kubenswrapper[4898]: E0120 04:06:28.742497 4898 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155 is running failed: container process not found" containerID="2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 04:06:28 crc kubenswrapper[4898]: E0120 04:06:28.743221 4898 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155 is running failed: container process not found" containerID="2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 04:06:28 crc kubenswrapper[4898]: E0120 04:06:28.743305 4898 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-tq8nx" podUID="c4018f65-d2f8-4e71-a417-1310469128c6" containerName="registry-server" Jan 20 04:06:29 crc kubenswrapper[4898]: I0120 04:06:29.736501 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="717fd193-b548-41c7-bea0-b43fb73e3535" path="/var/lib/kubelet/pods/717fd193-b548-41c7-bea0-b43fb73e3535/volumes" Jan 20 04:06:30 crc kubenswrapper[4898]: I0120 04:06:30.315885 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" podUID="393c968e-aaea-4b5f-86ba-44ffa221b98e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Jan 20 04:06:36 crc kubenswrapper[4898]: E0120 04:06:35.445551 4898 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 20 04:06:36 crc kubenswrapper[4898]: E0120 04:06:35.446455 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nxgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-rdd4p_openstack(ad38dd1a-677c-4db0-b349-684b1ca42820): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 04:06:36 crc kubenswrapper[4898]: E0120 04:06:35.447648 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-rdd4p" podUID="ad38dd1a-677c-4db0-b349-684b1ca42820" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.503267 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.509324 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.555870 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"014f3c64-ad47-4324-9cb9-2212e8ea2dc0","Type":"ContainerDied","Data":"98e590d57b6ce4f34dfba30c67ac70c6c61ccd0a477899d67f1f9f263941f840"} Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.556230 4898 scope.go:117] "RemoveContainer" containerID="486b70813865b28b59b24adf61b4b7d703873fb5d17c9a5c421f104dfe1448a3" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.556385 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.561558 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9355d0b-874f-4b9d-a48c-786180c6c94b","Type":"ContainerDied","Data":"344e5738c5ef4286515ea4cb700eb829a7eb5f596422bb96bcc6e2d96d028c55"} Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.561638 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: E0120 04:06:35.563222 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-rdd4p" podUID="ad38dd1a-677c-4db0-b349-684b1ca42820" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642174 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-logs\") pod \"f9355d0b-874f-4b9d-a48c-786180c6c94b\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642222 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-logs\") pod \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642256 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-httpd-run\") pod \"f9355d0b-874f-4b9d-a48c-786180c6c94b\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642286 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-public-tls-certs\") pod \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642308 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-config-data\") pod \"f9355d0b-874f-4b9d-a48c-786180c6c94b\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642323 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642421 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-config-data\") pod \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642460 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-scripts\") pod \"f9355d0b-874f-4b9d-a48c-786180c6c94b\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642502 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k79t\" (UniqueName: \"kubernetes.io/projected/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-kube-api-access-2k79t\") pod \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642525 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-combined-ca-bundle\") pod \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642554 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-httpd-run\") pod \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642582 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"f9355d0b-874f-4b9d-a48c-786180c6c94b\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642602 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-internal-tls-certs\") pod \"f9355d0b-874f-4b9d-a48c-786180c6c94b\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642639 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-combined-ca-bundle\") pod \"f9355d0b-874f-4b9d-a48c-786180c6c94b\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642658 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc62l\" (UniqueName: \"kubernetes.io/projected/f9355d0b-874f-4b9d-a48c-786180c6c94b-kube-api-access-gc62l\") pod \"f9355d0b-874f-4b9d-a48c-786180c6c94b\" (UID: \"f9355d0b-874f-4b9d-a48c-786180c6c94b\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.642681 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-scripts\") pod 
\"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\" (UID: \"014f3c64-ad47-4324-9cb9-2212e8ea2dc0\") " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.644573 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-logs" (OuterVolumeSpecName: "logs") pod "014f3c64-ad47-4324-9cb9-2212e8ea2dc0" (UID: "014f3c64-ad47-4324-9cb9-2212e8ea2dc0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.644879 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-logs" (OuterVolumeSpecName: "logs") pod "f9355d0b-874f-4b9d-a48c-786180c6c94b" (UID: "f9355d0b-874f-4b9d-a48c-786180c6c94b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.644954 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f9355d0b-874f-4b9d-a48c-786180c6c94b" (UID: "f9355d0b-874f-4b9d-a48c-786180c6c94b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.645122 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "014f3c64-ad47-4324-9cb9-2212e8ea2dc0" (UID: "014f3c64-ad47-4324-9cb9-2212e8ea2dc0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.650239 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-kube-api-access-2k79t" (OuterVolumeSpecName: "kube-api-access-2k79t") pod "014f3c64-ad47-4324-9cb9-2212e8ea2dc0" (UID: "014f3c64-ad47-4324-9cb9-2212e8ea2dc0"). InnerVolumeSpecName "kube-api-access-2k79t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.651712 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-scripts" (OuterVolumeSpecName: "scripts") pod "f9355d0b-874f-4b9d-a48c-786180c6c94b" (UID: "f9355d0b-874f-4b9d-a48c-786180c6c94b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.652178 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "014f3c64-ad47-4324-9cb9-2212e8ea2dc0" (UID: "014f3c64-ad47-4324-9cb9-2212e8ea2dc0"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.654861 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "f9355d0b-874f-4b9d-a48c-786180c6c94b" (UID: "f9355d0b-874f-4b9d-a48c-786180c6c94b"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.655841 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-scripts" (OuterVolumeSpecName: "scripts") pod "014f3c64-ad47-4324-9cb9-2212e8ea2dc0" (UID: "014f3c64-ad47-4324-9cb9-2212e8ea2dc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.671846 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9355d0b-874f-4b9d-a48c-786180c6c94b-kube-api-access-gc62l" (OuterVolumeSpecName: "kube-api-access-gc62l") pod "f9355d0b-874f-4b9d-a48c-786180c6c94b" (UID: "f9355d0b-874f-4b9d-a48c-786180c6c94b"). InnerVolumeSpecName "kube-api-access-gc62l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.681742 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9355d0b-874f-4b9d-a48c-786180c6c94b" (UID: "f9355d0b-874f-4b9d-a48c-786180c6c94b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.688401 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "014f3c64-ad47-4324-9cb9-2212e8ea2dc0" (UID: "014f3c64-ad47-4324-9cb9-2212e8ea2dc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.707876 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-config-data" (OuterVolumeSpecName: "config-data") pod "014f3c64-ad47-4324-9cb9-2212e8ea2dc0" (UID: "014f3c64-ad47-4324-9cb9-2212e8ea2dc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.712633 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-config-data" (OuterVolumeSpecName: "config-data") pod "f9355d0b-874f-4b9d-a48c-786180c6c94b" (UID: "f9355d0b-874f-4b9d-a48c-786180c6c94b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.728739 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f9355d0b-874f-4b9d-a48c-786180c6c94b" (UID: "f9355d0b-874f-4b9d-a48c-786180c6c94b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744308 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k79t\" (UniqueName: \"kubernetes.io/projected/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-kube-api-access-2k79t\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744336 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744345 4898 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744361 4898 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744372 4898 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744380 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744389 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc62l\" (UniqueName: \"kubernetes.io/projected/f9355d0b-874f-4b9d-a48c-786180c6c94b-kube-api-access-gc62l\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744399 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744407 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-logs\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744415 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-logs\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744423 4898 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9355d0b-874f-4b9d-a48c-786180c6c94b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744886 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744921 4898 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744934 4898 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.744944 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9355d0b-874f-4b9d-a48c-786180c6c94b-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.752399 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "014f3c64-ad47-4324-9cb9-2212e8ea2dc0" (UID: "014f3c64-ad47-4324-9cb9-2212e8ea2dc0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.764078 4898 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.766191 4898 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.846489 4898 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.846509 4898 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/014f3c64-ad47-4324-9cb9-2212e8ea2dc0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.846519 4898 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.896968 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.907679 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.919148 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:35.931269 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.006941 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:06:36 crc kubenswrapper[4898]: E0120 04:06:36.007578 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014f3c64-ad47-4324-9cb9-2212e8ea2dc0" containerName="glance-httpd" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.007630 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="014f3c64-ad47-4324-9cb9-2212e8ea2dc0" containerName="glance-httpd" Jan 20 04:06:36 crc kubenswrapper[4898]: E0120 04:06:36.007648 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9355d0b-874f-4b9d-a48c-786180c6c94b" containerName="glance-log" Jan 20 04:06:36 crc kubenswrapper[4898]: 
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.007654 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9355d0b-874f-4b9d-a48c-786180c6c94b" containerName="glance-log"
Jan 20 04:06:36 crc kubenswrapper[4898]: E0120 04:06:36.007667 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9355d0b-874f-4b9d-a48c-786180c6c94b" containerName="glance-httpd"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.007699 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9355d0b-874f-4b9d-a48c-786180c6c94b" containerName="glance-httpd"
Jan 20 04:06:36 crc kubenswrapper[4898]: E0120 04:06:36.007713 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014f3c64-ad47-4324-9cb9-2212e8ea2dc0" containerName="glance-log"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.007718 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="014f3c64-ad47-4324-9cb9-2212e8ea2dc0" containerName="glance-log"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.008006 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9355d0b-874f-4b9d-a48c-786180c6c94b" containerName="glance-httpd"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.008018 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9355d0b-874f-4b9d-a48c-786180c6c94b" containerName="glance-log"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.008037 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="014f3c64-ad47-4324-9cb9-2212e8ea2dc0" containerName="glance-httpd"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.008046 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="014f3c64-ad47-4324-9cb9-2212e8ea2dc0" containerName="glance-log"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.012053 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.020829 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.021793 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.021908 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-47npl"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.021992 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.025267 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.026803 4898 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.028498 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.028562 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.036386 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.051035 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154417 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154474 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154508 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154529 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmgqm\" (UniqueName: \"kubernetes.io/projected/58b0e573-251e-4529-9dad-55b94fbf1570-kube-api-access-zmgqm\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154547 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154565 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154597 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2stx\" (UniqueName: \"kubernetes.io/projected/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-kube-api-access-l2stx\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " 
pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154638 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154662 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-logs\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154689 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-logs\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154708 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154737 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154763 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154778 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154797 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.154814 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.256736 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.256798 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.256817 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.256870 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.256890 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.256935 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.256956 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmgqm\" (UniqueName: \"kubernetes.io/projected/58b0e573-251e-4529-9dad-55b94fbf1570-kube-api-access-zmgqm\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.256975 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.257012 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " 
pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.257051 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2stx\" (UniqueName: \"kubernetes.io/projected/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-kube-api-access-l2stx\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.257089 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.257110 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-logs\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.257150 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-logs\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.257170 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.257198 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.257225 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.258363 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.258547 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 
04:06:36.258618 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.258373 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-logs\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.259491 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.259602 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-logs\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.261641 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.263723 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.264566 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.280807 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.281451 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.281456 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.282849 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.282973 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.286332 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2stx\" (UniqueName: \"kubernetes.io/projected/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-kube-api-access-l2stx\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.293411 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.304312 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmgqm\" (UniqueName: \"kubernetes.io/projected/58b0e573-251e-4529-9dad-55b94fbf1570-kube-api-access-zmgqm\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.325993 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.398957 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.407001 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:36 crc kubenswrapper[4898]: E0120 04:06:36.944262 4898 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 20 04:06:36 crc kubenswrapper[4898]: E0120 04:06:36.944462 4898 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8b58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9ws79_openstack(9e533f97-e194-486f-9125-b29cf19e6648): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 04:06:36 crc kubenswrapper[4898]: E0120 04:06:36.945629 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-9ws79" podUID="9e533f97-e194-486f-9125-b29cf19e6648" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.960878 4898 scope.go:117] "RemoveContainer" 
containerID="a68444f46883d76c5e4c8517c9afcf6b211d7cd5b8a79a269bb343e5f94f87a7" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.973270 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.977910 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:06:36 crc kubenswrapper[4898]: I0120 04:06:36.998898 4898 scope.go:117] "RemoveContainer" containerID="732c5ee194da6712665164211e61a7a8d3daaa1fa46b43da78ed25b06de5380a" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.070989 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-nb\") pod \"393c968e-aaea-4b5f-86ba-44ffa221b98e\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.071071 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-catalog-content\") pod \"c4018f65-d2f8-4e71-a417-1310469128c6\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.071089 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-swift-storage-0\") pod \"393c968e-aaea-4b5f-86ba-44ffa221b98e\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.071134 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-sb\") pod \"393c968e-aaea-4b5f-86ba-44ffa221b98e\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.071182 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zktv\" (UniqueName: \"kubernetes.io/projected/393c968e-aaea-4b5f-86ba-44ffa221b98e-kube-api-access-4zktv\") pod \"393c968e-aaea-4b5f-86ba-44ffa221b98e\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.071224 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgfmn\" (UniqueName: \"kubernetes.io/projected/c4018f65-d2f8-4e71-a417-1310469128c6-kube-api-access-mgfmn\") pod \"c4018f65-d2f8-4e71-a417-1310469128c6\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.071286 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-svc\") pod \"393c968e-aaea-4b5f-86ba-44ffa221b98e\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.071350 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-utilities\") pod \"c4018f65-d2f8-4e71-a417-1310469128c6\" (UID: \"c4018f65-d2f8-4e71-a417-1310469128c6\") " Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 
04:06:37.071376 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-config\") pod \"393c968e-aaea-4b5f-86ba-44ffa221b98e\" (UID: \"393c968e-aaea-4b5f-86ba-44ffa221b98e\") " Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.076535 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-utilities" (OuterVolumeSpecName: "utilities") pod "c4018f65-d2f8-4e71-a417-1310469128c6" (UID: "c4018f65-d2f8-4e71-a417-1310469128c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.082411 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/393c968e-aaea-4b5f-86ba-44ffa221b98e-kube-api-access-4zktv" (OuterVolumeSpecName: "kube-api-access-4zktv") pod "393c968e-aaea-4b5f-86ba-44ffa221b98e" (UID: "393c968e-aaea-4b5f-86ba-44ffa221b98e"). InnerVolumeSpecName "kube-api-access-4zktv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.090305 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4018f65-d2f8-4e71-a417-1310469128c6-kube-api-access-mgfmn" (OuterVolumeSpecName: "kube-api-access-mgfmn") pod "c4018f65-d2f8-4e71-a417-1310469128c6" (UID: "c4018f65-d2f8-4e71-a417-1310469128c6"). InnerVolumeSpecName "kube-api-access-mgfmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.130900 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "393c968e-aaea-4b5f-86ba-44ffa221b98e" (UID: "393c968e-aaea-4b5f-86ba-44ffa221b98e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.157926 4898 scope.go:117] "RemoveContainer" containerID="7262cd00442d615cbb3007b4588ea7855500339fc23e751f0d42a63b49f30456" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.161333 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "393c968e-aaea-4b5f-86ba-44ffa221b98e" (UID: "393c968e-aaea-4b5f-86ba-44ffa221b98e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.165006 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "393c968e-aaea-4b5f-86ba-44ffa221b98e" (UID: "393c968e-aaea-4b5f-86ba-44ffa221b98e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.177641 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.177663 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.177672 4898 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.177683 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.177691 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zktv\" (UniqueName: \"kubernetes.io/projected/393c968e-aaea-4b5f-86ba-44ffa221b98e-kube-api-access-4zktv\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.177700 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgfmn\" (UniqueName: \"kubernetes.io/projected/c4018f65-d2f8-4e71-a417-1310469128c6-kube-api-access-mgfmn\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.183908 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-config" (OuterVolumeSpecName: "config") pod "393c968e-aaea-4b5f-86ba-44ffa221b98e" (UID: "393c968e-aaea-4b5f-86ba-44ffa221b98e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.193589 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "393c968e-aaea-4b5f-86ba-44ffa221b98e" (UID: "393c968e-aaea-4b5f-86ba-44ffa221b98e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.211281 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4018f65-d2f8-4e71-a417-1310469128c6" (UID: "c4018f65-d2f8-4e71-a417-1310469128c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.279770 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.279801 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/393c968e-aaea-4b5f-86ba-44ffa221b98e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.279812 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4018f65-d2f8-4e71-a417-1310469128c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.473984 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6n7z5"] Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.582086 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vvf7g" event={"ID":"f911b103-17f6-4caf-86f3-56f70295a884","Type":"ContainerStarted","Data":"63507c8ac0e818979463b5d432f5b3ea8fd400fc403ca69d8c4c1086bb1e59ed"} Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.587491 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfvjb" event={"ID":"456251ef-04f7-4abb-bbc3-2270ce7c33a6","Type":"ContainerStarted","Data":"12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181"} Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.590864 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq8nx" event={"ID":"c4018f65-d2f8-4e71-a417-1310469128c6","Type":"ContainerDied","Data":"dfb4a0c3c5c0f785e8903ff642420bc5c48e82099790716ede21938d1f1fcf4a"} Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.590949 4898 scope.go:117] "RemoveContainer" containerID="2d8d81c8bda801989cd8124f855da3616947ddf96ccd30fd5a6ced22e07c6155" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.591069 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tq8nx" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.609849 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-vvf7g" podStartSLOduration=3.02490393 podStartE2EDuration="28.609834432s" podCreationTimestamp="2026-01-20 04:06:09 +0000 UTC" firstStartedPulling="2026-01-20 04:06:11.3206716 +0000 UTC m=+1017.920459459" lastFinishedPulling="2026-01-20 04:06:36.905602102 +0000 UTC m=+1043.505389961" observedRunningTime="2026-01-20 04:06:37.603579245 +0000 UTC m=+1044.203367124" watchObservedRunningTime="2026-01-20 04:06:37.609834432 +0000 UTC m=+1044.209622291" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.612658 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bwv29" event={"ID":"2333e11a-59ac-4a16-914a-e846f5fa04d7","Type":"ContainerStarted","Data":"a8a9330ea287a8588b67dcfe381ddd7fbb8e2611ee7ef03ae0a67a1b78ae371c"} Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.637200 4898 scope.go:117] "RemoveContainer" containerID="0b6d1f3e7ba31f075c1eb1ea3443fa265601df6a008177990660e40afe4e199b" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.663055 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pfvjb" podStartSLOduration=3.839957393 podStartE2EDuration="28.663038765s" podCreationTimestamp="2026-01-20 04:06:09 +0000 UTC" firstStartedPulling="2026-01-20 04:06:12.122646763 +0000 UTC m=+1018.722434622" lastFinishedPulling="2026-01-20 04:06:36.945728135 +0000 UTC m=+1043.545515994" observedRunningTime="2026-01-20 04:06:37.660153934 +0000 UTC m=+1044.259941793" watchObservedRunningTime="2026-01-20 04:06:37.663038765 +0000 UTC m=+1044.262826624" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.672065 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" event={"ID":"393c968e-aaea-4b5f-86ba-44ffa221b98e","Type":"ContainerDied","Data":"3d170c4930811f750bfcde24f39f24e61170429d5cd542004f9071d09e7f627f"} Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.672156 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.708794 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tq8nx"] Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.715730 4898 scope.go:117] "RemoveContainer" containerID="c7f0062b1a548f88891f71f53586fc908780348ecf554f828ba040b2997da8f5" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.737937 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-bwv29" podStartSLOduration=3.360784083 podStartE2EDuration="28.73791993s" podCreationTimestamp="2026-01-20 04:06:09 +0000 UTC" firstStartedPulling="2026-01-20 04:06:11.538023116 +0000 UTC m=+1018.137810975" lastFinishedPulling="2026-01-20 04:06:36.915158963 +0000 UTC m=+1043.514946822" observedRunningTime="2026-01-20 04:06:37.691971455 +0000 UTC m=+1044.291759314" watchObservedRunningTime="2026-01-20 04:06:37.73791993 +0000 UTC m=+1044.337707789" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.742179 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="014f3c64-ad47-4324-9cb9-2212e8ea2dc0" path="/var/lib/kubelet/pods/014f3c64-ad47-4324-9cb9-2212e8ea2dc0/volumes" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.743203 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9355d0b-874f-4b9d-a48c-786180c6c94b" path="/var/lib/kubelet/pods/f9355d0b-874f-4b9d-a48c-786180c6c94b/volumes" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.743849 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tq8nx"] Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.743927 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"635aa30f-d711-43d2-906b-b951f5c6a9ad","Type":"ContainerStarted","Data":"f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489"} Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.792039 4898 scope.go:117] "RemoveContainer" containerID="1b813b2d842ece85416dbd0c831e6e8ddc1cc59432f832b3735b9790d5eaf388" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.793248 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6n7z5" event={"ID":"7306e81d-2494-41f7-84ba-e23d15cf73c5","Type":"ContainerStarted","Data":"44c129e565e5adf4576f3c64ed01f425e1904165552064a4a512a8166bded68c"} Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.822967 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-w46nx"] Jan 20 04:06:37 crc kubenswrapper[4898]: E0120 04:06:37.826874 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-9ws79" podUID="9e533f97-e194-486f-9125-b29cf19e6648" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.847824 4898 scope.go:117] "RemoveContainer" containerID="a585b3b807ff6eba355744114e6d293776cdc0077a2f03c80a1f77065a7d1812" Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.848373 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-w46nx"] Jan 20 04:06:37 crc kubenswrapper[4898]: I0120 04:06:37.865504 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Jan 20 04:06:38 crc kubenswrapper[4898]: I0120 04:06:38.319226 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 04:06:38 crc kubenswrapper[4898]: W0120 04:06:38.330685 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58b0e573_251e_4529_9dad_55b94fbf1570.slice/crio-cd3a3904edfe0bd20b4c8f66d52fa22ac06be4f509831b7c9d69dfb8a95c4e6d WatchSource:0}: Error finding container cd3a3904edfe0bd20b4c8f66d52fa22ac06be4f509831b7c9d69dfb8a95c4e6d: Status 404 returned error can't find the container with id cd3a3904edfe0bd20b4c8f66d52fa22ac06be4f509831b7c9d69dfb8a95c4e6d Jan 20 04:06:38 crc kubenswrapper[4898]: I0120 04:06:38.817011 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0","Type":"ContainerStarted","Data":"9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1"} Jan 20 04:06:38 crc kubenswrapper[4898]: I0120 04:06:38.817347 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0","Type":"ContainerStarted","Data":"4f2a9810b4851adfb4fab3c526794f33902e5327eb6591c2d150e8fe40882587"} Jan 20 04:06:38 crc kubenswrapper[4898]: I0120 04:06:38.818798 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6n7z5" event={"ID":"7306e81d-2494-41f7-84ba-e23d15cf73c5","Type":"ContainerStarted","Data":"7ccf5158b9f0b35d5866a559634a442e76aa1029820a156eb53f58996c661d60"} Jan 20 04:06:38 crc kubenswrapper[4898]: I0120 04:06:38.825069 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58b0e573-251e-4529-9dad-55b94fbf1570","Type":"ContainerStarted","Data":"cd3a3904edfe0bd20b4c8f66d52fa22ac06be4f509831b7c9d69dfb8a95c4e6d"} Jan 20 04:06:38 crc kubenswrapper[4898]: I0120 04:06:38.852488 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6n7z5" podStartSLOduration=10.852460064 podStartE2EDuration="10.852460064s" podCreationTimestamp="2026-01-20 04:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:38.84631996 +0000 UTC m=+1045.446107819" watchObservedRunningTime="2026-01-20 04:06:38.852460064 +0000 UTC m=+1045.452247923" Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.736015 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="393c968e-aaea-4b5f-86ba-44ffa221b98e" path="/var/lib/kubelet/pods/393c968e-aaea-4b5f-86ba-44ffa221b98e/volumes" Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.738120 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4018f65-d2f8-4e71-a417-1310469128c6" path="/var/lib/kubelet/pods/c4018f65-d2f8-4e71-a417-1310469128c6/volumes" Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.839810 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0","Type":"ContainerStarted","Data":"5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946"} Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.842536 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"635aa30f-d711-43d2-906b-b951f5c6a9ad","Type":"ContainerStarted","Data":"6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d"} Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.849788 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58b0e573-251e-4529-9dad-55b94fbf1570","Type":"ContainerStarted","Data":"3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64"} Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.849844 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58b0e573-251e-4529-9dad-55b94fbf1570","Type":"ContainerStarted","Data":"32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf"} Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.865240 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.865280 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.879972 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.879945609 podStartE2EDuration="4.879945609s" podCreationTimestamp="2026-01-20 04:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:39.871604037 +0000 UTC m=+1046.471392006" watchObservedRunningTime="2026-01-20 04:06:39.879945609 +0000 UTC m=+1046.479733468" Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.901579 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.901531998 podStartE2EDuration="4.901531998s" podCreationTimestamp="2026-01-20 04:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:39.899213265 +0000 UTC m=+1046.499001124" watchObservedRunningTime="2026-01-20 04:06:39.901531998 +0000 UTC m=+1046.501319857" Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.975768 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.976188 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.976247 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.977060 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e32845009c9a4455856ea9d28c879c6a89bdd3e67ca3cb7f9d9c98a468886eaa"} 
pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 04:06:39 crc kubenswrapper[4898]: I0120 04:06:39.977122 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://e32845009c9a4455856ea9d28c879c6a89bdd3e67ca3cb7f9d9c98a468886eaa" gracePeriod=600 Jan 20 04:06:40 crc kubenswrapper[4898]: I0120 04:06:40.316553 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-w46nx" podUID="393c968e-aaea-4b5f-86ba-44ffa221b98e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Jan 20 04:06:40 crc kubenswrapper[4898]: I0120 04:06:40.887891 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vvf7g" event={"ID":"f911b103-17f6-4caf-86f3-56f70295a884","Type":"ContainerDied","Data":"63507c8ac0e818979463b5d432f5b3ea8fd400fc403ca69d8c4c1086bb1e59ed"} Jan 20 04:06:40 crc kubenswrapper[4898]: I0120 04:06:40.887938 4898 generic.go:334] "Generic (PLEG): container finished" podID="f911b103-17f6-4caf-86f3-56f70295a884" containerID="63507c8ac0e818979463b5d432f5b3ea8fd400fc403ca69d8c4c1086bb1e59ed" exitCode=0 Jan 20 04:06:40 crc kubenswrapper[4898]: I0120 04:06:40.893394 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="e32845009c9a4455856ea9d28c879c6a89bdd3e67ca3cb7f9d9c98a468886eaa" exitCode=0 Jan 20 04:06:40 crc kubenswrapper[4898]: I0120 04:06:40.893460 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"e32845009c9a4455856ea9d28c879c6a89bdd3e67ca3cb7f9d9c98a468886eaa"} Jan 20 04:06:40 crc kubenswrapper[4898]: I0120 04:06:40.893509 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"1b7a7c76c50f5c70766a45b97b50c613bd67cf8335b24e271a6d0ca6195cc7af"} Jan 20 04:06:40 crc kubenswrapper[4898]: I0120 04:06:40.893529 4898 scope.go:117] "RemoveContainer" containerID="4ba5e3101afcfdc26fb12699a1870157681fb7b78baca1e12fdaf156b52381e1" Jan 20 04:06:40 crc kubenswrapper[4898]: I0120 04:06:40.927881 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pfvjb" podUID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerName="registry-server" probeResult="failure" output=< Jan 20 04:06:40 crc kubenswrapper[4898]: timeout: failed to connect service ":50051" within 1s Jan 20 04:06:40 crc kubenswrapper[4898]: > Jan 20 04:06:41 crc kubenswrapper[4898]: I0120 04:06:41.905175 4898 generic.go:334] "Generic (PLEG): container finished" podID="7306e81d-2494-41f7-84ba-e23d15cf73c5" containerID="7ccf5158b9f0b35d5866a559634a442e76aa1029820a156eb53f58996c661d60" exitCode=0 Jan 20 04:06:41 crc kubenswrapper[4898]: I0120 04:06:41.905521 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6n7z5" event={"ID":"7306e81d-2494-41f7-84ba-e23d15cf73c5","Type":"ContainerDied","Data":"7ccf5158b9f0b35d5866a559634a442e76aa1029820a156eb53f58996c661d60"} Jan 20 04:06:42 crc 
kubenswrapper[4898]: I0120 04:06:42.924508 4898 generic.go:334] "Generic (PLEG): container finished" podID="2333e11a-59ac-4a16-914a-e846f5fa04d7" containerID="a8a9330ea287a8588b67dcfe381ddd7fbb8e2611ee7ef03ae0a67a1b78ae371c" exitCode=0 Jan 20 04:06:42 crc kubenswrapper[4898]: I0120 04:06:42.924582 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bwv29" event={"ID":"2333e11a-59ac-4a16-914a-e846f5fa04d7","Type":"ContainerDied","Data":"a8a9330ea287a8588b67dcfe381ddd7fbb8e2611ee7ef03ae0a67a1b78ae371c"} Jan 20 04:06:43 crc kubenswrapper[4898]: I0120 04:06:43.936135 4898 generic.go:334] "Generic (PLEG): container finished" podID="15b219ed-e32a-4f8c-b3f7-2282e6fddcb3" containerID="bd0b80f057185241fc9b6a60514cadf528712a69597d8a45d6056e73dfbd2024" exitCode=0 Jan 20 04:06:43 crc kubenswrapper[4898]: I0120 04:06:43.936298 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sppjk" event={"ID":"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3","Type":"ContainerDied","Data":"bd0b80f057185241fc9b6a60514cadf528712a69597d8a45d6056e73dfbd2024"} Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.399418 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.399695 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.407455 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.407492 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.430523 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.440253 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.444557 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.473961 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.975536 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sppjk" event={"ID":"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3","Type":"ContainerDied","Data":"2f0667b874e0bc766b0a44fc29108ce9ce1569350f5dccbb0dbc9f328493abb7"} Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.975913 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f0667b874e0bc766b0a44fc29108ce9ce1569350f5dccbb0dbc9f328493abb7" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.978200 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bwv29" event={"ID":"2333e11a-59ac-4a16-914a-e846f5fa04d7","Type":"ContainerDied","Data":"8cdf2cde048fd3d9e3e28eb862d0a4d8aa9444735c2b6eb0f5de366075dee41a"} Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.978237 4898 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8cdf2cde048fd3d9e3e28eb862d0a4d8aa9444735c2b6eb0f5de366075dee41a" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.979982 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6n7z5" event={"ID":"7306e81d-2494-41f7-84ba-e23d15cf73c5","Type":"ContainerDied","Data":"44c129e565e5adf4576f3c64ed01f425e1904165552064a4a512a8166bded68c"} Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.980011 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44c129e565e5adf4576f3c64ed01f425e1904165552064a4a512a8166bded68c" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.981970 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vvf7g" event={"ID":"f911b103-17f6-4caf-86f3-56f70295a884","Type":"ContainerDied","Data":"d0a1a1fa9ea516e3f348f4bc97edf285bc61414eb8a12c6816b1da4b42c71e64"} Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.981994 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a1a1fa9ea516e3f348f4bc97edf285bc61414eb8a12c6816b1da4b42c71e64" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.982334 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.982396 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.982414 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 20 04:06:46 crc kubenswrapper[4898]: I0120 04:06:46.982444 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.092648 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.116984 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.149153 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.149983 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.189768 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpwgx\" (UniqueName: \"kubernetes.io/projected/7306e81d-2494-41f7-84ba-e23d15cf73c5-kube-api-access-dpwgx\") pod \"7306e81d-2494-41f7-84ba-e23d15cf73c5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.190176 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-scripts\") pod \"7306e81d-2494-41f7-84ba-e23d15cf73c5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.190304 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-credential-keys\") pod \"7306e81d-2494-41f7-84ba-e23d15cf73c5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.190396 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-fernet-keys\") pod \"7306e81d-2494-41f7-84ba-e23d15cf73c5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.190476 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-combined-ca-bundle\") pod \"7306e81d-2494-41f7-84ba-e23d15cf73c5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.190618 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-config-data\") pod \"7306e81d-2494-41f7-84ba-e23d15cf73c5\" (UID: \"7306e81d-2494-41f7-84ba-e23d15cf73c5\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.196699 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7306e81d-2494-41f7-84ba-e23d15cf73c5" (UID: "7306e81d-2494-41f7-84ba-e23d15cf73c5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.204598 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-scripts" (OuterVolumeSpecName: "scripts") pod "7306e81d-2494-41f7-84ba-e23d15cf73c5" (UID: "7306e81d-2494-41f7-84ba-e23d15cf73c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.241707 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7306e81d-2494-41f7-84ba-e23d15cf73c5" (UID: "7306e81d-2494-41f7-84ba-e23d15cf73c5"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.247589 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7306e81d-2494-41f7-84ba-e23d15cf73c5-kube-api-access-dpwgx" (OuterVolumeSpecName: "kube-api-access-dpwgx") pod "7306e81d-2494-41f7-84ba-e23d15cf73c5" (UID: "7306e81d-2494-41f7-84ba-e23d15cf73c5"). InnerVolumeSpecName "kube-api-access-dpwgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.249242 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-config-data" (OuterVolumeSpecName: "config-data") pod "7306e81d-2494-41f7-84ba-e23d15cf73c5" (UID: "7306e81d-2494-41f7-84ba-e23d15cf73c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.256959 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7306e81d-2494-41f7-84ba-e23d15cf73c5" (UID: "7306e81d-2494-41f7-84ba-e23d15cf73c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.293022 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dpcg\" (UniqueName: \"kubernetes.io/projected/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-kube-api-access-5dpcg\") pod \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.293069 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-combined-ca-bundle\") pod \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.293148 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-combined-ca-bundle\") pod \"2333e11a-59ac-4a16-914a-e846f5fa04d7\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.293177 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-db-sync-config-data\") pod \"2333e11a-59ac-4a16-914a-e846f5fa04d7\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.293197 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-combined-ca-bundle\") pod \"f911b103-17f6-4caf-86f3-56f70295a884\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.293219 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f911b103-17f6-4caf-86f3-56f70295a884-logs\") pod \"f911b103-17f6-4caf-86f3-56f70295a884\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " Jan 20 04:06:47 crc 
kubenswrapper[4898]: I0120 04:06:47.293278 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-config-data\") pod \"f911b103-17f6-4caf-86f3-56f70295a884\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.293317 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-scripts\") pod \"f911b103-17f6-4caf-86f3-56f70295a884\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.293345 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb4wp\" (UniqueName: \"kubernetes.io/projected/f911b103-17f6-4caf-86f3-56f70295a884-kube-api-access-rb4wp\") pod \"f911b103-17f6-4caf-86f3-56f70295a884\" (UID: \"f911b103-17f6-4caf-86f3-56f70295a884\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.293366 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-config\") pod \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\" (UID: \"15b219ed-e32a-4f8c-b3f7-2282e6fddcb3\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.293389 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p857\" (UniqueName: \"kubernetes.io/projected/2333e11a-59ac-4a16-914a-e846f5fa04d7-kube-api-access-2p857\") pod \"2333e11a-59ac-4a16-914a-e846f5fa04d7\" (UID: \"2333e11a-59ac-4a16-914a-e846f5fa04d7\") " Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.294200 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f911b103-17f6-4caf-86f3-56f70295a884-logs" (OuterVolumeSpecName: "logs") pod "f911b103-17f6-4caf-86f3-56f70295a884" (UID: "f911b103-17f6-4caf-86f3-56f70295a884"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.294754 4898 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.294783 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.294798 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.294813 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpwgx\" (UniqueName: \"kubernetes.io/projected/7306e81d-2494-41f7-84ba-e23d15cf73c5-kube-api-access-dpwgx\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.294825 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.294836 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f911b103-17f6-4caf-86f3-56f70295a884-logs\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.294845 4898 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7306e81d-2494-41f7-84ba-e23d15cf73c5-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.297983 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-kube-api-access-5dpcg" (OuterVolumeSpecName: "kube-api-access-5dpcg") pod "15b219ed-e32a-4f8c-b3f7-2282e6fddcb3" (UID: "15b219ed-e32a-4f8c-b3f7-2282e6fddcb3"). InnerVolumeSpecName "kube-api-access-5dpcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.298410 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f911b103-17f6-4caf-86f3-56f70295a884-kube-api-access-rb4wp" (OuterVolumeSpecName: "kube-api-access-rb4wp") pod "f911b103-17f6-4caf-86f3-56f70295a884" (UID: "f911b103-17f6-4caf-86f3-56f70295a884"). InnerVolumeSpecName "kube-api-access-rb4wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.300051 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-scripts" (OuterVolumeSpecName: "scripts") pod "f911b103-17f6-4caf-86f3-56f70295a884" (UID: "f911b103-17f6-4caf-86f3-56f70295a884"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.301945 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2333e11a-59ac-4a16-914a-e846f5fa04d7-kube-api-access-2p857" (OuterVolumeSpecName: "kube-api-access-2p857") pod "2333e11a-59ac-4a16-914a-e846f5fa04d7" (UID: "2333e11a-59ac-4a16-914a-e846f5fa04d7"). InnerVolumeSpecName "kube-api-access-2p857". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.310382 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2333e11a-59ac-4a16-914a-e846f5fa04d7" (UID: "2333e11a-59ac-4a16-914a-e846f5fa04d7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.317865 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2333e11a-59ac-4a16-914a-e846f5fa04d7" (UID: "2333e11a-59ac-4a16-914a-e846f5fa04d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.322920 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f911b103-17f6-4caf-86f3-56f70295a884" (UID: "f911b103-17f6-4caf-86f3-56f70295a884"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.327793 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15b219ed-e32a-4f8c-b3f7-2282e6fddcb3" (UID: "15b219ed-e32a-4f8c-b3f7-2282e6fddcb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.330776 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-config-data" (OuterVolumeSpecName: "config-data") pod "f911b103-17f6-4caf-86f3-56f70295a884" (UID: "f911b103-17f6-4caf-86f3-56f70295a884"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.335201 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-config" (OuterVolumeSpecName: "config") pod "15b219ed-e32a-4f8c-b3f7-2282e6fddcb3" (UID: "15b219ed-e32a-4f8c-b3f7-2282e6fddcb3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.397183 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.397210 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb4wp\" (UniqueName: \"kubernetes.io/projected/f911b103-17f6-4caf-86f3-56f70295a884-kube-api-access-rb4wp\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.397221 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.397230 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p857\" (UniqueName: \"kubernetes.io/projected/2333e11a-59ac-4a16-914a-e846f5fa04d7-kube-api-access-2p857\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.397239 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dpcg\" (UniqueName: \"kubernetes.io/projected/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-kube-api-access-5dpcg\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.397248 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.397256 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.397266 4898 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2333e11a-59ac-4a16-914a-e846f5fa04d7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.397273 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:47 crc kubenswrapper[4898]: I0120 04:06:47.397281 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f911b103-17f6-4caf-86f3-56f70295a884-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.012353 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"635aa30f-d711-43d2-906b-b951f5c6a9ad","Type":"ContainerStarted","Data":"e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27"} Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.012627 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sppjk" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.012679 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6n7z5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.013450 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bwv29" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.013424 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vvf7g" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.213452 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8684dcb884-x94mz"] Jan 20 04:06:48 crc kubenswrapper[4898]: E0120 04:06:48.215358 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4018f65-d2f8-4e71-a417-1310469128c6" containerName="extract-content" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215386 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4018f65-d2f8-4e71-a417-1310469128c6" containerName="extract-content" Jan 20 04:06:48 crc kubenswrapper[4898]: E0120 04:06:48.215398 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4018f65-d2f8-4e71-a417-1310469128c6" containerName="registry-server" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215406 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4018f65-d2f8-4e71-a417-1310469128c6" containerName="registry-server" Jan 20 04:06:48 crc kubenswrapper[4898]: E0120 04:06:48.215417 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393c968e-aaea-4b5f-86ba-44ffa221b98e" containerName="dnsmasq-dns" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215424 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="393c968e-aaea-4b5f-86ba-44ffa221b98e" containerName="dnsmasq-dns" Jan 20 04:06:48 crc kubenswrapper[4898]: E0120 04:06:48.215453 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4018f65-d2f8-4e71-a417-1310469128c6" containerName="extract-utilities" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215462 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4018f65-d2f8-4e71-a417-1310469128c6" containerName="extract-utilities" Jan 20 04:06:48 crc kubenswrapper[4898]: E0120 04:06:48.215477 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b219ed-e32a-4f8c-b3f7-2282e6fddcb3" containerName="neutron-db-sync" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215486 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b219ed-e32a-4f8c-b3f7-2282e6fddcb3" containerName="neutron-db-sync" Jan 20 04:06:48 crc kubenswrapper[4898]: E0120 04:06:48.215503 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f911b103-17f6-4caf-86f3-56f70295a884" containerName="placement-db-sync" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215510 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f911b103-17f6-4caf-86f3-56f70295a884" containerName="placement-db-sync" Jan 20 04:06:48 crc kubenswrapper[4898]: E0120 04:06:48.215521 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393c968e-aaea-4b5f-86ba-44ffa221b98e" containerName="init" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215528 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="393c968e-aaea-4b5f-86ba-44ffa221b98e" containerName="init" Jan 20 04:06:48 crc kubenswrapper[4898]: E0120 04:06:48.215535 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7306e81d-2494-41f7-84ba-e23d15cf73c5" containerName="keystone-bootstrap" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215544 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="7306e81d-2494-41f7-84ba-e23d15cf73c5" 
containerName="keystone-bootstrap" Jan 20 04:06:48 crc kubenswrapper[4898]: E0120 04:06:48.215560 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2333e11a-59ac-4a16-914a-e846f5fa04d7" containerName="barbican-db-sync" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215567 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2333e11a-59ac-4a16-914a-e846f5fa04d7" containerName="barbican-db-sync" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215755 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f911b103-17f6-4caf-86f3-56f70295a884" containerName="placement-db-sync" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215781 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b219ed-e32a-4f8c-b3f7-2282e6fddcb3" containerName="neutron-db-sync" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215794 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4018f65-d2f8-4e71-a417-1310469128c6" containerName="registry-server" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215809 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="2333e11a-59ac-4a16-914a-e846f5fa04d7" containerName="barbican-db-sync" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215820 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="393c968e-aaea-4b5f-86ba-44ffa221b98e" containerName="dnsmasq-dns" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.215830 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="7306e81d-2494-41f7-84ba-e23d15cf73c5" containerName="keystone-bootstrap" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.216550 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.219711 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.220215 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.220493 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.226156 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.226482 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.232060 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t7cgs" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.242462 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8684dcb884-x94mz"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.317170 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-scripts\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.317254 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-82dgg\" (UniqueName: \"kubernetes.io/projected/ad336ef7-2c6d-46c5-b7ae-996366226bc5-kube-api-access-82dgg\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.317300 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-credential-keys\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.317318 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-internal-tls-certs\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.317383 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-combined-ca-bundle\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.317443 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-config-data\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.317465 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-public-tls-certs\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.317511 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-fernet-keys\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.317612 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-659876b84-8djq9"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.319541 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.322802 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.322999 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.322856 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ldtwr" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.322906 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.322960 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.342115 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-659876b84-8djq9"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.421841 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-logs\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.422321 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-config-data\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.422353 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-combined-ca-bundle\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.422376 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-public-tls-certs\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.422401 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-config-data\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.425477 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-fernet-keys\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.432471 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fsdpw\" (UniqueName: \"kubernetes.io/projected/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-kube-api-access-fsdpw\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.432765 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-scripts\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.432907 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82dgg\" (UniqueName: \"kubernetes.io/projected/ad336ef7-2c6d-46c5-b7ae-996366226bc5-kube-api-access-82dgg\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.433008 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-scripts\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.433109 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-credential-keys\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.433114 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-fernet-keys\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.433193 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-config-data\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.435637 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-internal-tls-certs\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.435844 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-internal-tls-certs\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.435887 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-public-tls-certs\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.435909 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-combined-ca-bundle\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.440330 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-scripts\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.441173 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-public-tls-certs\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.443169 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-combined-ca-bundle\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.443945 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-internal-tls-certs\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.462184 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad336ef7-2c6d-46c5-b7ae-996366226bc5-credential-keys\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.475583 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-646c85868c-g2f2w"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.483223 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.491161 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-txqlm" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.491575 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.491892 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.503933 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82dgg\" (UniqueName: \"kubernetes.io/projected/ad336ef7-2c6d-46c5-b7ae-996366226bc5-kube-api-access-82dgg\") pod \"keystone-8684dcb884-x94mz\" (UID: \"ad336ef7-2c6d-46c5-b7ae-996366226bc5\") " pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.526501 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-646c85868c-g2f2w"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.542397 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsdpw\" (UniqueName: \"kubernetes.io/projected/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-kube-api-access-fsdpw\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.542718 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-scripts\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.542878 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-internal-tls-certs\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.542956 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-public-tls-certs\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.543177 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-logs\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.543264 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-combined-ca-bundle\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.543335 4898 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-config-data\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.544644 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-logs\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.554294 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-combined-ca-bundle\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.555058 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-config-data\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.557394 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.557816 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-scripts\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.560967 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-internal-tls-certs\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.577790 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-public-tls-certs\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.596626 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5b98946766-bzxqb"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.598207 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.600578 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.612895 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsdpw\" (UniqueName: \"kubernetes.io/projected/83dbc10c-d5eb-435e-97b2-3b615b6e4e10-kube-api-access-fsdpw\") pod \"placement-659876b84-8djq9\" (UID: \"83dbc10c-d5eb-435e-97b2-3b615b6e4e10\") " pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.613102 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b98946766-bzxqb"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.629370 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-9zck5"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.632940 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.637524 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-9zck5"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.646424 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85d50ce2-27f1-4ae8-8612-647c1856e03e-config-data-custom\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.646482 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d50ce2-27f1-4ae8-8612-647c1856e03e-config-data\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.646574 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d50ce2-27f1-4ae8-8612-647c1856e03e-logs\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.646596 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d50ce2-27f1-4ae8-8612-647c1856e03e-combined-ca-bundle\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.646620 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flmr7\" (UniqueName: \"kubernetes.io/projected/85d50ce2-27f1-4ae8-8612-647c1856e03e-kube-api-access-flmr7\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.657802 4898 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-api-6766898678-74xkh"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.659424 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.662390 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.663103 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.672711 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-9zck5"] Jan 20 04:06:48 crc kubenswrapper[4898]: E0120 04:06:48.673869 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-768vs ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6b7b667979-9zck5" podUID="fac5c74b-a06e-4752-a352-2d329c307389" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.683099 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6766898678-74xkh"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748465 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-svc\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748505 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85d50ce2-27f1-4ae8-8612-647c1856e03e-config-data-custom\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748536 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data-custom\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748565 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d50ce2-27f1-4ae8-8612-647c1856e03e-config-data\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748589 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748623 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748645 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c82f48-c250-400b-b1b0-00a613cbd1e7-config-data\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748663 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c82f48-c250-400b-b1b0-00a613cbd1e7-combined-ca-bundle\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748679 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87c82f48-c250-400b-b1b0-00a613cbd1e7-config-data-custom\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748704 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-config\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748727 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-768vs\" (UniqueName: \"kubernetes.io/projected/fac5c74b-a06e-4752-a352-2d329c307389-kube-api-access-768vs\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748747 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grdcs\" (UniqueName: \"kubernetes.io/projected/87c82f48-c250-400b-b1b0-00a613cbd1e7-kube-api-access-grdcs\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748763 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed637740-b1d6-464e-9167-010b86294ae0-logs\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748787 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kc8j\" (UniqueName: \"kubernetes.io/projected/ed637740-b1d6-464e-9167-010b86294ae0-kube-api-access-9kc8j\") pod \"barbican-api-6766898678-74xkh\" (UID: 
\"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748811 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748830 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d50ce2-27f1-4ae8-8612-647c1856e03e-logs\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748850 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d50ce2-27f1-4ae8-8612-647c1856e03e-combined-ca-bundle\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748870 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flmr7\" (UniqueName: \"kubernetes.io/projected/85d50ce2-27f1-4ae8-8612-647c1856e03e-kube-api-access-flmr7\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748906 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-combined-ca-bundle\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748923 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c82f48-c250-400b-b1b0-00a613cbd1e7-logs\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.748939 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.752877 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d50ce2-27f1-4ae8-8612-647c1856e03e-logs\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.756105 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85d50ce2-27f1-4ae8-8612-647c1856e03e-config-data-custom\") pod 
\"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.760347 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d50ce2-27f1-4ae8-8612-647c1856e03e-config-data\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.764074 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d50ce2-27f1-4ae8-8612-647c1856e03e-combined-ca-bundle\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.776355 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-krtcn"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.776984 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flmr7\" (UniqueName: \"kubernetes.io/projected/85d50ce2-27f1-4ae8-8612-647c1856e03e-kube-api-access-flmr7\") pod \"barbican-worker-646c85868c-g2f2w\" (UID: \"85d50ce2-27f1-4ae8-8612-647c1856e03e\") " pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.777707 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.818784 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8dc9f49b-vg2wk"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.824642 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.835028 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8dc9f49b-vg2wk"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.836958 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.837020 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.837237 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.837418 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ng5wv" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.851863 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c82f48-c250-400b-b1b0-00a613cbd1e7-config-data\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.851921 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c82f48-c250-400b-b1b0-00a613cbd1e7-combined-ca-bundle\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.851945 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87c82f48-c250-400b-b1b0-00a613cbd1e7-config-data-custom\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.851999 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-config\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852034 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-768vs\" (UniqueName: \"kubernetes.io/projected/fac5c74b-a06e-4752-a352-2d329c307389-kube-api-access-768vs\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852078 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grdcs\" (UniqueName: \"kubernetes.io/projected/87c82f48-c250-400b-b1b0-00a613cbd1e7-kube-api-access-grdcs\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852112 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ed637740-b1d6-464e-9167-010b86294ae0-logs\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852166 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kc8j\" (UniqueName: \"kubernetes.io/projected/ed637740-b1d6-464e-9167-010b86294ae0-kube-api-access-9kc8j\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852235 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852349 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-combined-ca-bundle\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852388 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c82f48-c250-400b-b1b0-00a613cbd1e7-logs\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852408 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852453 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-svc\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852483 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data-custom\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852538 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.852611 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.853680 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.859193 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c82f48-c250-400b-b1b0-00a613cbd1e7-logs\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.859788 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c82f48-c250-400b-b1b0-00a613cbd1e7-config-data\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.860310 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.861007 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-svc\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.870194 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.872202 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data-custom\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.872598 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.873999 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-config\") pod 
\"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.875110 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed637740-b1d6-464e-9167-010b86294ae0-logs\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.880787 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-krtcn"] Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.881733 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87c82f48-c250-400b-b1b0-00a613cbd1e7-config-data-custom\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.893503 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c82f48-c250-400b-b1b0-00a613cbd1e7-combined-ca-bundle\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.899957 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-combined-ca-bundle\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.901141 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-768vs\" (UniqueName: \"kubernetes.io/projected/fac5c74b-a06e-4752-a352-2d329c307389-kube-api-access-768vs\") pod \"dnsmasq-dns-6b7b667979-9zck5\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.904966 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kc8j\" (UniqueName: \"kubernetes.io/projected/ed637740-b1d6-464e-9167-010b86294ae0-kube-api-access-9kc8j\") pod \"barbican-api-6766898678-74xkh\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.924304 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grdcs\" (UniqueName: \"kubernetes.io/projected/87c82f48-c250-400b-b1b0-00a613cbd1e7-kube-api-access-grdcs\") pod \"barbican-keystone-listener-5b98946766-bzxqb\" (UID: \"87c82f48-c250-400b-b1b0-00a613cbd1e7\") " pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.927081 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-646c85868c-g2f2w" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.941553 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.955019 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-config\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.955054 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.955101 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5rrq\" (UniqueName: \"kubernetes.io/projected/1d4ffd01-3225-47f8-a88b-00acb1506664-kube-api-access-h5rrq\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.955122 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtzg\" (UniqueName: \"kubernetes.io/projected/00669966-0f38-45df-9949-d2ee09f6d294-kube-api-access-5xtzg\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.955160 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-config\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.955210 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.955241 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.955264 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-ovndb-tls-certs\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.955345 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.955364 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-httpd-config\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.955395 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-combined-ca-bundle\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:48 crc kubenswrapper[4898]: I0120 04:06:48.996061 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.046228 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.069592 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.069686 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.069714 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-ovndb-tls-certs\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.071046 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.073012 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.074081 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.074140 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-httpd-config\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.074195 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-combined-ca-bundle\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.074287 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-config\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.074311 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.074401 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5rrq\" (UniqueName: \"kubernetes.io/projected/1d4ffd01-3225-47f8-a88b-00acb1506664-kube-api-access-h5rrq\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.074418 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xtzg\" (UniqueName: \"kubernetes.io/projected/00669966-0f38-45df-9949-d2ee09f6d294-kube-api-access-5xtzg\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.075547 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-config\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.076596 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-config\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.077096 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: 
\"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.082339 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.091461 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-combined-ca-bundle\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.098931 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-httpd-config\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.099556 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-ovndb-tls-certs\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.101673 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-config\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.102314 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xtzg\" (UniqueName: \"kubernetes.io/projected/00669966-0f38-45df-9949-d2ee09f6d294-kube-api-access-5xtzg\") pod \"neutron-8dc9f49b-vg2wk\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.104068 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5rrq\" (UniqueName: \"kubernetes.io/projected/1d4ffd01-3225-47f8-a88b-00acb1506664-kube-api-access-h5rrq\") pod \"dnsmasq-dns-848cf88cfc-krtcn\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.104586 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.117598 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.279949 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-svc\") pod \"fac5c74b-a06e-4752-a352-2d329c307389\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.280265 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-config\") pod \"fac5c74b-a06e-4752-a352-2d329c307389\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.280303 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-nb\") pod \"fac5c74b-a06e-4752-a352-2d329c307389\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.280376 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-sb\") pod \"fac5c74b-a06e-4752-a352-2d329c307389\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.280404 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-swift-storage-0\") pod \"fac5c74b-a06e-4752-a352-2d329c307389\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.280454 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-768vs\" (UniqueName: \"kubernetes.io/projected/fac5c74b-a06e-4752-a352-2d329c307389-kube-api-access-768vs\") pod \"fac5c74b-a06e-4752-a352-2d329c307389\" (UID: \"fac5c74b-a06e-4752-a352-2d329c307389\") " Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.283600 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fac5c74b-a06e-4752-a352-2d329c307389" (UID: "fac5c74b-a06e-4752-a352-2d329c307389"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.283953 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fac5c74b-a06e-4752-a352-2d329c307389" (UID: "fac5c74b-a06e-4752-a352-2d329c307389"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.284254 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-config" (OuterVolumeSpecName: "config") pod "fac5c74b-a06e-4752-a352-2d329c307389" (UID: "fac5c74b-a06e-4752-a352-2d329c307389"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.286934 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fac5c74b-a06e-4752-a352-2d329c307389" (UID: "fac5c74b-a06e-4752-a352-2d329c307389"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.287260 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fac5c74b-a06e-4752-a352-2d329c307389" (UID: "fac5c74b-a06e-4752-a352-2d329c307389"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.294142 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac5c74b-a06e-4752-a352-2d329c307389-kube-api-access-768vs" (OuterVolumeSpecName: "kube-api-access-768vs") pod "fac5c74b-a06e-4752-a352-2d329c307389" (UID: "fac5c74b-a06e-4752-a352-2d329c307389"). InnerVolumeSpecName "kube-api-access-768vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.315212 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8684dcb884-x94mz"] Jan 20 04:06:49 crc kubenswrapper[4898]: W0120 04:06:49.320608 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad336ef7_2c6d_46c5_b7ae_996366226bc5.slice/crio-d0f76d2690ccd5e1faffba4f8f80ab59622eba4fc4bad4e442a559b012070fdd WatchSource:0}: Error finding container d0f76d2690ccd5e1faffba4f8f80ab59622eba4fc4bad4e442a559b012070fdd: Status 404 returned error can't find the container with id d0f76d2690ccd5e1faffba4f8f80ab59622eba4fc4bad4e442a559b012070fdd Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.378891 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.379099 4898 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.379647 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.379968 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-659876b84-8djq9"] Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.382497 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.382518 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.382527 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.382537 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.382547 4898 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fac5c74b-a06e-4752-a352-2d329c307389-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.382556 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-768vs\" (UniqueName: \"kubernetes.io/projected/fac5c74b-a06e-4752-a352-2d329c307389-kube-api-access-768vs\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:49 crc kubenswrapper[4898]: W0120 04:06:49.393949 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83dbc10c_d5eb_435e_97b2_3b615b6e4e10.slice/crio-de2d12af39707da8d4f4885e8d2a4f90c675ad235e13ff5b7a21fbe49acb394e WatchSource:0}: Error finding container de2d12af39707da8d4f4885e8d2a4f90c675ad235e13ff5b7a21fbe49acb394e: Status 404 returned error can't find the container with id de2d12af39707da8d4f4885e8d2a4f90c675ad235e13ff5b7a21fbe49acb394e Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.521858 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-646c85868c-g2f2w"] Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.591225 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.591330 4898 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.632311 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.657778 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b98946766-bzxqb"] Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.842973 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-krtcn"] Jan 20 04:06:49 crc kubenswrapper[4898]: I0120 04:06:49.873054 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6766898678-74xkh"] 
Jan 20 04:06:49 crc kubenswrapper[4898]: W0120 04:06:49.883558 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d4ffd01_3225_47f8_a88b_00acb1506664.slice/crio-bfdd1d5a4387e807ff7d5396ca7b489ce7003d477e637471fd37fbc28b22f0df WatchSource:0}: Error finding container bfdd1d5a4387e807ff7d5396ca7b489ce7003d477e637471fd37fbc28b22f0df: Status 404 returned error can't find the container with id bfdd1d5a4387e807ff7d5396ca7b489ce7003d477e637471fd37fbc28b22f0df Jan 20 04:06:49 crc kubenswrapper[4898]: W0120 04:06:49.941208 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded637740_b1d6_464e_9167_010b86294ae0.slice/crio-50201ad7e362c2e8552a2853471560817780bc72df94318e1deebe0b645044f9 WatchSource:0}: Error finding container 50201ad7e362c2e8552a2853471560817780bc72df94318e1deebe0b645044f9: Status 404 returned error can't find the container with id 50201ad7e362c2e8552a2853471560817780bc72df94318e1deebe0b645044f9 Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.056563 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.076901 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.091791 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6766898678-74xkh" event={"ID":"ed637740-b1d6-464e-9167-010b86294ae0","Type":"ContainerStarted","Data":"50201ad7e362c2e8552a2853471560817780bc72df94318e1deebe0b645044f9"} Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.121574 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-646c85868c-g2f2w" event={"ID":"85d50ce2-27f1-4ae8-8612-647c1856e03e","Type":"ContainerStarted","Data":"4f3336f380d358ec6bf24796905852e51816c30f0155d051b515085e5298790b"} Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.127168 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659876b84-8djq9" event={"ID":"83dbc10c-d5eb-435e-97b2-3b615b6e4e10","Type":"ContainerStarted","Data":"de2d12af39707da8d4f4885e8d2a4f90c675ad235e13ff5b7a21fbe49acb394e"} Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.138922 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" event={"ID":"1d4ffd01-3225-47f8-a88b-00acb1506664","Type":"ContainerStarted","Data":"bfdd1d5a4387e807ff7d5396ca7b489ce7003d477e637471fd37fbc28b22f0df"} Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.140283 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8684dcb884-x94mz" event={"ID":"ad336ef7-2c6d-46c5-b7ae-996366226bc5","Type":"ContainerStarted","Data":"d0f76d2690ccd5e1faffba4f8f80ab59622eba4fc4bad4e442a559b012070fdd"} Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.168214 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" event={"ID":"87c82f48-c250-400b-b1b0-00a613cbd1e7","Type":"ContainerStarted","Data":"98e1bb042045ba034310ba331a0f99002366fc802378aa2d6acb0320597ca7c7"} Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.168988 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-9zck5" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.248668 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.275940 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-9zck5"] Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.313888 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-9zck5"] Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.370714 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8dc9f49b-vg2wk"] Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.387813 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pfvjb"] Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.843878 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6885b44c89-c7rr5"] Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.848065 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.852557 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.852873 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.854096 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6885b44c89-c7rr5"] Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.952634 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-public-tls-certs\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.952697 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-ovndb-tls-certs\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.952743 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc9qr\" (UniqueName: \"kubernetes.io/projected/3ab69a28-3e73-467a-a062-d881466b26a6-kube-api-access-sc9qr\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.952763 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-config\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.953037 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-internal-tls-certs\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.953192 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-httpd-config\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:50 crc kubenswrapper[4898]: I0120 04:06:50.953302 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-combined-ca-bundle\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.087139 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-public-tls-certs\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.088104 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-ovndb-tls-certs\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.088169 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc9qr\" (UniqueName: \"kubernetes.io/projected/3ab69a28-3e73-467a-a062-d881466b26a6-kube-api-access-sc9qr\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.088190 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-config\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.088263 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-internal-tls-certs\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.088319 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-httpd-config\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.088362 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-combined-ca-bundle\") pod 
\"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.098965 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-internal-tls-certs\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.114994 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc9qr\" (UniqueName: \"kubernetes.io/projected/3ab69a28-3e73-467a-a062-d881466b26a6-kube-api-access-sc9qr\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.116225 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-httpd-config\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.119370 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-public-tls-certs\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.134833 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-combined-ca-bundle\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.143891 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-config\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.149297 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab69a28-3e73-467a-a062-d881466b26a6-ovndb-tls-certs\") pod \"neutron-6885b44c89-c7rr5\" (UID: \"3ab69a28-3e73-467a-a062-d881466b26a6\") " pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.193361 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.202600 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8dc9f49b-vg2wk" event={"ID":"00669966-0f38-45df-9949-d2ee09f6d294","Type":"ContainerStarted","Data":"92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e"} Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.202646 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8dc9f49b-vg2wk" event={"ID":"00669966-0f38-45df-9949-d2ee09f6d294","Type":"ContainerStarted","Data":"2b5ea78d05ef6757f75a9298f55fa6afd3b5c576af6a6aab47bd959494c437c2"} Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.220692 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8684dcb884-x94mz" event={"ID":"ad336ef7-2c6d-46c5-b7ae-996366226bc5","Type":"ContainerStarted","Data":"d4d03e786040c281def01ca895588a2c63aaca040fdb547bf96a510845fd54ef"} Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.220882 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.252034 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8684dcb884-x94mz" podStartSLOduration=3.252021432 podStartE2EDuration="3.252021432s" podCreationTimestamp="2026-01-20 04:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:51.249878645 +0000 UTC m=+1057.849666504" watchObservedRunningTime="2026-01-20 04:06:51.252021432 +0000 UTC m=+1057.851809291" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.261896 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6766898678-74xkh" event={"ID":"ed637740-b1d6-464e-9167-010b86294ae0","Type":"ContainerStarted","Data":"e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b"} Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.261946 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6766898678-74xkh" event={"ID":"ed637740-b1d6-464e-9167-010b86294ae0","Type":"ContainerStarted","Data":"f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0"} Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.261958 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.261979 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.297182 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659876b84-8djq9" event={"ID":"83dbc10c-d5eb-435e-97b2-3b615b6e4e10","Type":"ContainerStarted","Data":"530f4bc4cbd9d68e16889f25189f93f44e811b43c5ec406cd33a5bde8fb8a4d7"} Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.297643 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659876b84-8djq9" event={"ID":"83dbc10c-d5eb-435e-97b2-3b615b6e4e10","Type":"ContainerStarted","Data":"dc90e702551cacbe34b6182d4ac5fca400ed757df246ee7c1c0f1b0ea50a5706"} Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.297885 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:51 crc 
kubenswrapper[4898]: I0120 04:06:51.297977 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-659876b84-8djq9" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.299563 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6766898678-74xkh" podStartSLOduration=3.299552367 podStartE2EDuration="3.299552367s" podCreationTimestamp="2026-01-20 04:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:51.291800783 +0000 UTC m=+1057.891588642" watchObservedRunningTime="2026-01-20 04:06:51.299552367 +0000 UTC m=+1057.899340226" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.306779 4898 generic.go:334] "Generic (PLEG): container finished" podID="1d4ffd01-3225-47f8-a88b-00acb1506664" containerID="a6c6bbcc584993754336c388c084e70e7bcc1937398e3bc350910838639eb3b7" exitCode=0 Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.310226 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" event={"ID":"1d4ffd01-3225-47f8-a88b-00acb1506664","Type":"ContainerDied","Data":"a6c6bbcc584993754336c388c084e70e7bcc1937398e3bc350910838639eb3b7"} Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.339785 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-659876b84-8djq9" podStartSLOduration=3.339766081 podStartE2EDuration="3.339766081s" podCreationTimestamp="2026-01-20 04:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:51.326663449 +0000 UTC m=+1057.926451308" watchObservedRunningTime="2026-01-20 04:06:51.339766081 +0000 UTC m=+1057.939553940" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.764580 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac5c74b-a06e-4752-a352-2d329c307389" path="/var/lib/kubelet/pods/fac5c74b-a06e-4752-a352-2d329c307389/volumes" Jan 20 04:06:51 crc kubenswrapper[4898]: I0120 04:06:51.817420 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6885b44c89-c7rr5"] Jan 20 04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.338633 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" event={"ID":"1d4ffd01-3225-47f8-a88b-00acb1506664","Type":"ContainerStarted","Data":"8427e59f35f3faf19f0ea544471797793fb2b5065da9d2c5f3f9b8bf0b0c07d8"} Jan 20 04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.339137 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.340661 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8dc9f49b-vg2wk" event={"ID":"00669966-0f38-45df-9949-d2ee09f6d294","Type":"ContainerStarted","Data":"38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b"} Jan 20 04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.340882 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.342317 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rdd4p" event={"ID":"ad38dd1a-677c-4db0-b349-684b1ca42820","Type":"ContainerStarted","Data":"93d609b1228b8176c7493225ee9142b68fd51bce54c9a0bdfb32c0492a32245c"} Jan 20 
04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.345786 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6885b44c89-c7rr5" event={"ID":"3ab69a28-3e73-467a-a062-d881466b26a6","Type":"ContainerStarted","Data":"2c83d93c812a160d28cb10d8d06cc35826b739458dae58fbe23cd0e2d0d4363a"} Jan 20 04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.352485 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ws79" event={"ID":"9e533f97-e194-486f-9125-b29cf19e6648","Type":"ContainerStarted","Data":"6eb4a41f50ff2b3f47a448e0930f80008d0a2516f06967d5b06291085a3c7c4d"} Jan 20 04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.352664 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pfvjb" podUID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerName="registry-server" containerID="cri-o://12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181" gracePeriod=2 Jan 20 04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.364053 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" podStartSLOduration=4.364037546 podStartE2EDuration="4.364037546s" podCreationTimestamp="2026-01-20 04:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:52.360255097 +0000 UTC m=+1058.960042956" watchObservedRunningTime="2026-01-20 04:06:52.364037546 +0000 UTC m=+1058.963825405" Jan 20 04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.380167 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8dc9f49b-vg2wk" podStartSLOduration=4.380149733 podStartE2EDuration="4.380149733s" podCreationTimestamp="2026-01-20 04:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:52.380102711 +0000 UTC m=+1058.979890570" watchObservedRunningTime="2026-01-20 04:06:52.380149733 +0000 UTC m=+1058.979937592" Jan 20 04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.399389 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9ws79" podStartSLOduration=2.498635456 podStartE2EDuration="43.399364067s" podCreationTimestamp="2026-01-20 04:06:09 +0000 UTC" firstStartedPulling="2026-01-20 04:06:10.552938743 +0000 UTC m=+1017.152726602" lastFinishedPulling="2026-01-20 04:06:51.453667354 +0000 UTC m=+1058.053455213" observedRunningTime="2026-01-20 04:06:52.395145094 +0000 UTC m=+1058.994932953" watchObservedRunningTime="2026-01-20 04:06:52.399364067 +0000 UTC m=+1058.999151926" Jan 20 04:06:52 crc kubenswrapper[4898]: I0120 04:06:52.416816 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-rdd4p" podStartSLOduration=3.072918449 podStartE2EDuration="43.416787955s" podCreationTimestamp="2026-01-20 04:06:09 +0000 UTC" firstStartedPulling="2026-01-20 04:06:11.073878598 +0000 UTC m=+1017.673666457" lastFinishedPulling="2026-01-20 04:06:51.417748104 +0000 UTC m=+1058.017535963" observedRunningTime="2026-01-20 04:06:52.413883834 +0000 UTC m=+1059.013671693" watchObservedRunningTime="2026-01-20 04:06:52.416787955 +0000 UTC m=+1059.016575814" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.267215 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.367979 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6885b44c89-c7rr5" event={"ID":"3ab69a28-3e73-467a-a062-d881466b26a6","Type":"ContainerStarted","Data":"bcfb7476cb1d2e77856ca87a1950bc0c2954f54411db9887c3b5be72d247c9f0"} Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.370686 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" event={"ID":"87c82f48-c250-400b-b1b0-00a613cbd1e7","Type":"ContainerStarted","Data":"856321bff0cc507646f11b307deedfbbc87ba8306113ab2e21adc33ad11a414f"} Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.372630 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-646c85868c-g2f2w" event={"ID":"85d50ce2-27f1-4ae8-8612-647c1856e03e","Type":"ContainerStarted","Data":"4e16ae582a533313fa2e8066407f452da62e027483dadda6d950164bc8ce5a95"} Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.374908 4898 generic.go:334] "Generic (PLEG): container finished" podID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerID="12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181" exitCode=0 Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.375147 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfvjb" event={"ID":"456251ef-04f7-4abb-bbc3-2270ce7c33a6","Type":"ContainerDied","Data":"12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181"} Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.375179 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfvjb" event={"ID":"456251ef-04f7-4abb-bbc3-2270ce7c33a6","Type":"ContainerDied","Data":"a17b76b8db4314ba69e94efcc8a2544e961dfb4cc8b45bc89c5e1563ea40c1f0"} Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.375196 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pfvjb" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.375212 4898 scope.go:117] "RemoveContainer" containerID="12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.400196 4898 scope.go:117] "RemoveContainer" containerID="08fe7a52e54573a47e823db6efcd68e08ad2a4318425c6cc1850ac63e2d5f2e8" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.436482 4898 scope.go:117] "RemoveContainer" containerID="2cfdf5a8fc3c264b956b0acd5d6caab0749d34aa6986413f6961de23d84c95ad" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.454237 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-catalog-content\") pod \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.454372 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfxmd\" (UniqueName: \"kubernetes.io/projected/456251ef-04f7-4abb-bbc3-2270ce7c33a6-kube-api-access-xfxmd\") pod \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.454415 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-utilities\") pod \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\" (UID: \"456251ef-04f7-4abb-bbc3-2270ce7c33a6\") " Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.455540 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-utilities" (OuterVolumeSpecName: "utilities") pod "456251ef-04f7-4abb-bbc3-2270ce7c33a6" (UID: "456251ef-04f7-4abb-bbc3-2270ce7c33a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.458964 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456251ef-04f7-4abb-bbc3-2270ce7c33a6-kube-api-access-xfxmd" (OuterVolumeSpecName: "kube-api-access-xfxmd") pod "456251ef-04f7-4abb-bbc3-2270ce7c33a6" (UID: "456251ef-04f7-4abb-bbc3-2270ce7c33a6"). InnerVolumeSpecName "kube-api-access-xfxmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.459065 4898 scope.go:117] "RemoveContainer" containerID="12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181" Jan 20 04:06:53 crc kubenswrapper[4898]: E0120 04:06:53.459503 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181\": container with ID starting with 12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181 not found: ID does not exist" containerID="12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.459544 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181"} err="failed to get container status \"12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181\": rpc error: code = NotFound desc = could not find container \"12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181\": container with ID starting with 12d0e7d9fa0d26c8ee6e8b0522243695d69d741b425f14a655dc1aaffb534181 not found: ID does not exist" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.459566 4898 scope.go:117] "RemoveContainer" containerID="08fe7a52e54573a47e823db6efcd68e08ad2a4318425c6cc1850ac63e2d5f2e8" Jan 20 04:06:53 crc kubenswrapper[4898]: E0120 04:06:53.459797 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08fe7a52e54573a47e823db6efcd68e08ad2a4318425c6cc1850ac63e2d5f2e8\": container with ID starting with 08fe7a52e54573a47e823db6efcd68e08ad2a4318425c6cc1850ac63e2d5f2e8 not found: ID does not exist" containerID="08fe7a52e54573a47e823db6efcd68e08ad2a4318425c6cc1850ac63e2d5f2e8" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.459861 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08fe7a52e54573a47e823db6efcd68e08ad2a4318425c6cc1850ac63e2d5f2e8"} err="failed to get container status \"08fe7a52e54573a47e823db6efcd68e08ad2a4318425c6cc1850ac63e2d5f2e8\": rpc error: code = NotFound desc = could not find container \"08fe7a52e54573a47e823db6efcd68e08ad2a4318425c6cc1850ac63e2d5f2e8\": container with ID starting with 08fe7a52e54573a47e823db6efcd68e08ad2a4318425c6cc1850ac63e2d5f2e8 not found: ID does not exist" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.459892 4898 scope.go:117] "RemoveContainer" containerID="2cfdf5a8fc3c264b956b0acd5d6caab0749d34aa6986413f6961de23d84c95ad" Jan 20 04:06:53 crc kubenswrapper[4898]: E0120 04:06:53.460166 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cfdf5a8fc3c264b956b0acd5d6caab0749d34aa6986413f6961de23d84c95ad\": container with ID starting with 2cfdf5a8fc3c264b956b0acd5d6caab0749d34aa6986413f6961de23d84c95ad not found: ID does not exist" containerID="2cfdf5a8fc3c264b956b0acd5d6caab0749d34aa6986413f6961de23d84c95ad" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.460188 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfdf5a8fc3c264b956b0acd5d6caab0749d34aa6986413f6961de23d84c95ad"} err="failed to get container status \"2cfdf5a8fc3c264b956b0acd5d6caab0749d34aa6986413f6961de23d84c95ad\": rpc error: code = NotFound desc = could not 
find container \"2cfdf5a8fc3c264b956b0acd5d6caab0749d34aa6986413f6961de23d84c95ad\": container with ID starting with 2cfdf5a8fc3c264b956b0acd5d6caab0749d34aa6986413f6961de23d84c95ad not found: ID does not exist" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.502655 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "456251ef-04f7-4abb-bbc3-2270ce7c33a6" (UID: "456251ef-04f7-4abb-bbc3-2270ce7c33a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.556563 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfxmd\" (UniqueName: \"kubernetes.io/projected/456251ef-04f7-4abb-bbc3-2270ce7c33a6-kube-api-access-xfxmd\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.556847 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.556903 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456251ef-04f7-4abb-bbc3-2270ce7c33a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.704228 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pfvjb"] Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.714280 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pfvjb"] Jan 20 04:06:53 crc kubenswrapper[4898]: I0120 04:06:53.742847 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" path="/var/lib/kubelet/pods/456251ef-04f7-4abb-bbc3-2270ce7c33a6/volumes" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.385010 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6885b44c89-c7rr5" event={"ID":"3ab69a28-3e73-467a-a062-d881466b26a6","Type":"ContainerStarted","Data":"a55062085515cea6b75a26ba6dec1ef2c7606f95b220b61e1c5ed0e3cdd085da"} Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.385471 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.388308 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" event={"ID":"87c82f48-c250-400b-b1b0-00a613cbd1e7","Type":"ContainerStarted","Data":"992439fbcaa59c6b150d11c953851be24af0ef2c674056bec094770e72c08c69"} Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.391519 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-646c85868c-g2f2w" event={"ID":"85d50ce2-27f1-4ae8-8612-647c1856e03e","Type":"ContainerStarted","Data":"d7c45cc9120db5dcc8a425ac1a8684fa28ac1c417f914555d65726119c2b6e6b"} Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.406362 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6885b44c89-c7rr5" podStartSLOduration=4.406346578 podStartE2EDuration="4.406346578s" podCreationTimestamp="2026-01-20 04:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:06:54.405219943 +0000 UTC m=+1061.005007802" watchObservedRunningTime="2026-01-20 04:06:54.406346578 +0000 UTC m=+1061.006134427" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.426120 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5b98946766-bzxqb" podStartSLOduration=3.268226512 podStartE2EDuration="6.42610154s" podCreationTimestamp="2026-01-20 04:06:48 +0000 UTC" firstStartedPulling="2026-01-20 04:06:49.763656711 +0000 UTC m=+1056.363444580" lastFinishedPulling="2026-01-20 04:06:52.921531749 +0000 UTC m=+1059.521319608" observedRunningTime="2026-01-20 04:06:54.422745724 +0000 UTC m=+1061.022533583" watchObservedRunningTime="2026-01-20 04:06:54.42610154 +0000 UTC m=+1061.025889399" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.446628 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-646c85868c-g2f2w" podStartSLOduration=3.184044514 podStartE2EDuration="6.446606255s" podCreationTimestamp="2026-01-20 04:06:48 +0000 UTC" firstStartedPulling="2026-01-20 04:06:49.662291703 +0000 UTC m=+1056.262079562" lastFinishedPulling="2026-01-20 04:06:52.924853444 +0000 UTC m=+1059.524641303" observedRunningTime="2026-01-20 04:06:54.444912951 +0000 UTC m=+1061.044700810" watchObservedRunningTime="2026-01-20 04:06:54.446606255 +0000 UTC m=+1061.046394114" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.889596 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-668d8597cb-gql24"] Jan 20 04:06:54 crc kubenswrapper[4898]: E0120 04:06:54.890774 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerName="registry-server" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.890866 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerName="registry-server" Jan 20 04:06:54 crc kubenswrapper[4898]: E0120 04:06:54.890938 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerName="extract-content" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.890989 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerName="extract-content" Jan 20 04:06:54 crc kubenswrapper[4898]: E0120 04:06:54.891054 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerName="extract-utilities" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.891108 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerName="extract-utilities" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.893020 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="456251ef-04f7-4abb-bbc3-2270ce7c33a6" containerName="registry-server" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.899574 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.904685 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.905173 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 20 04:06:54 crc kubenswrapper[4898]: I0120 04:06:54.929073 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-668d8597cb-gql24"] Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.087791 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-config-data\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.088129 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lglm\" (UniqueName: \"kubernetes.io/projected/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-kube-api-access-8lglm\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.088192 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-combined-ca-bundle\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.088283 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-public-tls-certs\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.088334 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-logs\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.088360 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-internal-tls-certs\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.088466 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-config-data-custom\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.190867 4898 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-config-data-custom\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.191030 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-config-data\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.191074 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lglm\" (UniqueName: \"kubernetes.io/projected/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-kube-api-access-8lglm\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.191111 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-combined-ca-bundle\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.191187 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-public-tls-certs\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.191218 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-logs\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.191244 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-internal-tls-certs\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.192231 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-logs\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.200242 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-internal-tls-certs\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.200349 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-combined-ca-bundle\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.200537 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-config-data-custom\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.201254 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-config-data\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.209801 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-public-tls-certs\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.210157 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lglm\" (UniqueName: \"kubernetes.io/projected/5b8611ec-0af8-4f71-86ac-f2b2f16f10ed-kube-api-access-8lglm\") pod \"barbican-api-668d8597cb-gql24\" (UID: \"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed\") " pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:55 crc kubenswrapper[4898]: I0120 04:06:55.233573 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:06:57 crc kubenswrapper[4898]: I0120 04:06:57.421833 4898 generic.go:334] "Generic (PLEG): container finished" podID="ad38dd1a-677c-4db0-b349-684b1ca42820" containerID="93d609b1228b8176c7493225ee9142b68fd51bce54c9a0bdfb32c0492a32245c" exitCode=0 Jan 20 04:06:57 crc kubenswrapper[4898]: I0120 04:06:57.421937 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rdd4p" event={"ID":"ad38dd1a-677c-4db0-b349-684b1ca42820","Type":"ContainerDied","Data":"93d609b1228b8176c7493225ee9142b68fd51bce54c9a0bdfb32c0492a32245c"} Jan 20 04:06:59 crc kubenswrapper[4898]: I0120 04:06:59.119957 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:06:59 crc kubenswrapper[4898]: I0120 04:06:59.177684 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b6z25"] Jan 20 04:06:59 crc kubenswrapper[4898]: I0120 04:06:59.177908 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" podUID="bf594a05-0b51-41bc-b43d-ae25e2b98843" containerName="dnsmasq-dns" containerID="cri-o://3d8931377e42d941f048ba2a6fab1b41744ebb803dc18ec98bb417df04719674" gracePeriod=10 Jan 20 04:07:00 crc kubenswrapper[4898]: I0120 04:07:00.245349 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6766898678-74xkh" podUID="ed637740-b1d6-464e-9167-010b86294ae0" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 04:07:00 crc kubenswrapper[4898]: E0120 04:07:00.277063 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf594a05_0b51_41bc_b43d_ae25e2b98843.slice/crio-conmon-3d8931377e42d941f048ba2a6fab1b41744ebb803dc18ec98bb417df04719674.scope\": RecentStats: unable to find data in memory cache]" Jan 20 04:07:00 crc kubenswrapper[4898]: I0120 04:07:00.455108 4898 generic.go:334] "Generic (PLEG): container finished" podID="bf594a05-0b51-41bc-b43d-ae25e2b98843" containerID="3d8931377e42d941f048ba2a6fab1b41744ebb803dc18ec98bb417df04719674" exitCode=0 Jan 20 04:07:00 crc kubenswrapper[4898]: I0120 04:07:00.455156 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" event={"ID":"bf594a05-0b51-41bc-b43d-ae25e2b98843","Type":"ContainerDied","Data":"3d8931377e42d941f048ba2a6fab1b41744ebb803dc18ec98bb417df04719674"} Jan 20 04:07:00 crc kubenswrapper[4898]: I0120 04:07:00.574489 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:07:00 crc kubenswrapper[4898]: I0120 04:07:00.580451 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.305318 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-rdd4p" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.311641 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nxgf\" (UniqueName: \"kubernetes.io/projected/ad38dd1a-677c-4db0-b349-684b1ca42820-kube-api-access-9nxgf\") pod \"ad38dd1a-677c-4db0-b349-684b1ca42820\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.312647 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-config-data\") pod \"ad38dd1a-677c-4db0-b349-684b1ca42820\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.312884 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-combined-ca-bundle\") pod \"ad38dd1a-677c-4db0-b349-684b1ca42820\" (UID: \"ad38dd1a-677c-4db0-b349-684b1ca42820\") " Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.319642 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.332515 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad38dd1a-677c-4db0-b349-684b1ca42820-kube-api-access-9nxgf" (OuterVolumeSpecName: "kube-api-access-9nxgf") pod "ad38dd1a-677c-4db0-b349-684b1ca42820" (UID: "ad38dd1a-677c-4db0-b349-684b1ca42820"). InnerVolumeSpecName "kube-api-access-9nxgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.360731 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad38dd1a-677c-4db0-b349-684b1ca42820" (UID: "ad38dd1a-677c-4db0-b349-684b1ca42820"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.413864 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg46r\" (UniqueName: \"kubernetes.io/projected/bf594a05-0b51-41bc-b43d-ae25e2b98843-kube-api-access-wg46r\") pod \"bf594a05-0b51-41bc-b43d-ae25e2b98843\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.414136 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-swift-storage-0\") pod \"bf594a05-0b51-41bc-b43d-ae25e2b98843\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.414240 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-svc\") pod \"bf594a05-0b51-41bc-b43d-ae25e2b98843\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.414300 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-nb\") pod \"bf594a05-0b51-41bc-b43d-ae25e2b98843\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.414373 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-sb\") pod \"bf594a05-0b51-41bc-b43d-ae25e2b98843\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.414465 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-config\") pod \"bf594a05-0b51-41bc-b43d-ae25e2b98843\" (UID: \"bf594a05-0b51-41bc-b43d-ae25e2b98843\") " Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.415305 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.415335 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nxgf\" (UniqueName: \"kubernetes.io/projected/ad38dd1a-677c-4db0-b349-684b1ca42820-kube-api-access-9nxgf\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.421223 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf594a05-0b51-41bc-b43d-ae25e2b98843-kube-api-access-wg46r" (OuterVolumeSpecName: "kube-api-access-wg46r") pod "bf594a05-0b51-41bc-b43d-ae25e2b98843" (UID: "bf594a05-0b51-41bc-b43d-ae25e2b98843"). InnerVolumeSpecName "kube-api-access-wg46r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.476456 4898 generic.go:334] "Generic (PLEG): container finished" podID="9e533f97-e194-486f-9125-b29cf19e6648" containerID="6eb4a41f50ff2b3f47a448e0930f80008d0a2516f06967d5b06291085a3c7c4d" exitCode=0 Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.476646 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ws79" event={"ID":"9e533f97-e194-486f-9125-b29cf19e6648","Type":"ContainerDied","Data":"6eb4a41f50ff2b3f47a448e0930f80008d0a2516f06967d5b06291085a3c7c4d"} Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.479868 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rdd4p" event={"ID":"ad38dd1a-677c-4db0-b349-684b1ca42820","Type":"ContainerDied","Data":"ddd6d6700b1a8452b575a6da8816531bf3c8b80f75ad23cca6fd96e19e6bd782"} Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.479909 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddd6d6700b1a8452b575a6da8816531bf3c8b80f75ad23cca6fd96e19e6bd782" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.479970 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rdd4p" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.485332 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.485693 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" event={"ID":"bf594a05-0b51-41bc-b43d-ae25e2b98843","Type":"ContainerDied","Data":"fec6f46ff0171396ee5d83bdc8c2a53e084cca058a9d2de469ac3d7e94c75076"} Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.485728 4898 scope.go:117] "RemoveContainer" containerID="3d8931377e42d941f048ba2a6fab1b41744ebb803dc18ec98bb417df04719674" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.487761 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf594a05-0b51-41bc-b43d-ae25e2b98843" (UID: "bf594a05-0b51-41bc-b43d-ae25e2b98843"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.494075 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bf594a05-0b51-41bc-b43d-ae25e2b98843" (UID: "bf594a05-0b51-41bc-b43d-ae25e2b98843"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.499729 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bf594a05-0b51-41bc-b43d-ae25e2b98843" (UID: "bf594a05-0b51-41bc-b43d-ae25e2b98843"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.509681 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bf594a05-0b51-41bc-b43d-ae25e2b98843" (UID: "bf594a05-0b51-41bc-b43d-ae25e2b98843"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.511408 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-config" (OuterVolumeSpecName: "config") pod "bf594a05-0b51-41bc-b43d-ae25e2b98843" (UID: "bf594a05-0b51-41bc-b43d-ae25e2b98843"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.516662 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.516683 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.516692 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.516701 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg46r\" (UniqueName: \"kubernetes.io/projected/bf594a05-0b51-41bc-b43d-ae25e2b98843-kube-api-access-wg46r\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.516712 4898 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.516720 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf594a05-0b51-41bc-b43d-ae25e2b98843-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.528535 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-config-data" (OuterVolumeSpecName: "config-data") pod "ad38dd1a-677c-4db0-b349-684b1ca42820" (UID: "ad38dd1a-677c-4db0-b349-684b1ca42820"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.618455 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad38dd1a-677c-4db0-b349-684b1ca42820-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.826906 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b6z25"] Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.834377 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-b6z25"] Jan 20 04:07:01 crc kubenswrapper[4898]: I0120 04:07:01.849583 4898 scope.go:117] "RemoveContainer" containerID="d0a0e075d2d7ebdf7f846f78419c67c8bb005d8862bc3e038226adc475c5a5a9" Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.318122 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-668d8597cb-gql24"] Jan 20 04:07:02 crc kubenswrapper[4898]: W0120 04:07:02.327737 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b8611ec_0af8_4f71_86ac_f2b2f16f10ed.slice/crio-8bf9f3b10b5bfd1b4dab16a6a19b107962d3298c7a9d4a05892bb7a452d4feff WatchSource:0}: Error finding container 8bf9f3b10b5bfd1b4dab16a6a19b107962d3298c7a9d4a05892bb7a452d4feff: Status 404 returned error can't find the container with id 8bf9f3b10b5bfd1b4dab16a6a19b107962d3298c7a9d4a05892bb7a452d4feff Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.495408 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668d8597cb-gql24" event={"ID":"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed","Type":"ContainerStarted","Data":"8bf9f3b10b5bfd1b4dab16a6a19b107962d3298c7a9d4a05892bb7a452d4feff"} Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.505397 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="ceilometer-central-agent" containerID="cri-o://f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489" gracePeriod=30 Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.505660 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"635aa30f-d711-43d2-906b-b951f5c6a9ad","Type":"ContainerStarted","Data":"b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd"} Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.505708 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.506242 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="proxy-httpd" containerID="cri-o://b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd" gracePeriod=30 Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.509203 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="sg-core" containerID="cri-o://e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27" gracePeriod=30 Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.509310 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" 
containerName="ceilometer-notification-agent" containerID="cri-o://6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d" gracePeriod=30 Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.550814 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.926715071 podStartE2EDuration="53.55078954s" podCreationTimestamp="2026-01-20 04:06:09 +0000 UTC" firstStartedPulling="2026-01-20 04:06:11.306573426 +0000 UTC m=+1017.906361285" lastFinishedPulling="2026-01-20 04:07:01.930647895 +0000 UTC m=+1068.530435754" observedRunningTime="2026-01-20 04:07:02.5314011 +0000 UTC m=+1069.131188989" watchObservedRunningTime="2026-01-20 04:07:02.55078954 +0000 UTC m=+1069.150577399" Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.811106 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9ws79" Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.945981 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-scripts\") pod \"9e533f97-e194-486f-9125-b29cf19e6648\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.946041 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-combined-ca-bundle\") pod \"9e533f97-e194-486f-9125-b29cf19e6648\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.946185 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-db-sync-config-data\") pod \"9e533f97-e194-486f-9125-b29cf19e6648\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.946257 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-config-data\") pod \"9e533f97-e194-486f-9125-b29cf19e6648\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.946306 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e533f97-e194-486f-9125-b29cf19e6648-etc-machine-id\") pod \"9e533f97-e194-486f-9125-b29cf19e6648\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.946327 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8b58\" (UniqueName: \"kubernetes.io/projected/9e533f97-e194-486f-9125-b29cf19e6648-kube-api-access-z8b58\") pod \"9e533f97-e194-486f-9125-b29cf19e6648\" (UID: \"9e533f97-e194-486f-9125-b29cf19e6648\") " Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.948189 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e533f97-e194-486f-9125-b29cf19e6648-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9e533f97-e194-486f-9125-b29cf19e6648" (UID: "9e533f97-e194-486f-9125-b29cf19e6648"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.951013 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e533f97-e194-486f-9125-b29cf19e6648-kube-api-access-z8b58" (OuterVolumeSpecName: "kube-api-access-z8b58") pod "9e533f97-e194-486f-9125-b29cf19e6648" (UID: "9e533f97-e194-486f-9125-b29cf19e6648"). InnerVolumeSpecName "kube-api-access-z8b58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.951323 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9e533f97-e194-486f-9125-b29cf19e6648" (UID: "9e533f97-e194-486f-9125-b29cf19e6648"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.951632 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-scripts" (OuterVolumeSpecName: "scripts") pod "9e533f97-e194-486f-9125-b29cf19e6648" (UID: "9e533f97-e194-486f-9125-b29cf19e6648"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.969714 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e533f97-e194-486f-9125-b29cf19e6648" (UID: "9e533f97-e194-486f-9125-b29cf19e6648"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:02 crc kubenswrapper[4898]: I0120 04:07:02.993226 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-config-data" (OuterVolumeSpecName: "config-data") pod "9e533f97-e194-486f-9125-b29cf19e6648" (UID: "9e533f97-e194-486f-9125-b29cf19e6648"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.048147 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.048191 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.048217 4898 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.048229 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e533f97-e194-486f-9125-b29cf19e6648-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.048246 4898 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e533f97-e194-486f-9125-b29cf19e6648-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.048261 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8b58\" (UniqueName: \"kubernetes.io/projected/9e533f97-e194-486f-9125-b29cf19e6648-kube-api-access-z8b58\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.519163 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668d8597cb-gql24" event={"ID":"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed","Type":"ContainerStarted","Data":"dc51c6fbe72bc7031b35c9febe4d081ba74b331a637217062e3c41037d4c4ab0"} Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.519510 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.519525 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668d8597cb-gql24" event={"ID":"5b8611ec-0af8-4f71-86ac-f2b2f16f10ed","Type":"ContainerStarted","Data":"c2e9c51f12bd25bb1f4678760c5b36d19e3efcfb3bb3ea426b36138d4dd9134c"} Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.519540 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.522027 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ws79" event={"ID":"9e533f97-e194-486f-9125-b29cf19e6648","Type":"ContainerDied","Data":"3cafbcd20d24961c32820a7f2ba6fcc9e99b4429625b91635a54df79c36aefd4"} Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.522058 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cafbcd20d24961c32820a7f2ba6fcc9e99b4429625b91635a54df79c36aefd4" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.522064 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9ws79" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.530294 4898 generic.go:334] "Generic (PLEG): container finished" podID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerID="b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd" exitCode=0 Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.530344 4898 generic.go:334] "Generic (PLEG): container finished" podID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerID="e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27" exitCode=2 Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.530357 4898 generic.go:334] "Generic (PLEG): container finished" podID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerID="f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489" exitCode=0 Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.530383 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"635aa30f-d711-43d2-906b-b951f5c6a9ad","Type":"ContainerDied","Data":"b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd"} Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.530423 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"635aa30f-d711-43d2-906b-b951f5c6a9ad","Type":"ContainerDied","Data":"e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27"} Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.530453 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"635aa30f-d711-43d2-906b-b951f5c6a9ad","Type":"ContainerDied","Data":"f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489"} Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.548000 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-668d8597cb-gql24" podStartSLOduration=9.547978033 podStartE2EDuration="9.547978033s" podCreationTimestamp="2026-01-20 04:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:07:03.541768677 +0000 UTC m=+1070.141556536" watchObservedRunningTime="2026-01-20 04:07:03.547978033 +0000 UTC m=+1070.147765912" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.736062 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf594a05-0b51-41bc-b43d-ae25e2b98843" path="/var/lib/kubelet/pods/bf594a05-0b51-41bc-b43d-ae25e2b98843/volumes" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.862942 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 04:07:03 crc kubenswrapper[4898]: E0120 04:07:03.863309 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e533f97-e194-486f-9125-b29cf19e6648" containerName="cinder-db-sync" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.863330 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e533f97-e194-486f-9125-b29cf19e6648" containerName="cinder-db-sync" Jan 20 04:07:03 crc kubenswrapper[4898]: E0120 04:07:03.863344 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf594a05-0b51-41bc-b43d-ae25e2b98843" containerName="dnsmasq-dns" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.863351 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf594a05-0b51-41bc-b43d-ae25e2b98843" containerName="dnsmasq-dns" Jan 20 04:07:03 crc kubenswrapper[4898]: E0120 04:07:03.863361 4898 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad38dd1a-677c-4db0-b349-684b1ca42820" containerName="heat-db-sync" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.863369 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad38dd1a-677c-4db0-b349-684b1ca42820" containerName="heat-db-sync" Jan 20 04:07:03 crc kubenswrapper[4898]: E0120 04:07:03.863394 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf594a05-0b51-41bc-b43d-ae25e2b98843" containerName="init" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.863399 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf594a05-0b51-41bc-b43d-ae25e2b98843" containerName="init" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.863611 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf594a05-0b51-41bc-b43d-ae25e2b98843" containerName="dnsmasq-dns" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.863632 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad38dd1a-677c-4db0-b349-684b1ca42820" containerName="heat-db-sync" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.863649 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e533f97-e194-486f-9125-b29cf19e6648" containerName="cinder-db-sync" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.864585 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.869384 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.869580 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.869780 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.869879 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rs9pl" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.876929 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.916154 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zkj8r"] Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.918122 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.929739 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zkj8r"] Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.963020 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.963141 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.963215 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.963286 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpdbj\" (UniqueName: \"kubernetes.io/projected/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-kube-api-access-jpdbj\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.963328 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:03 crc kubenswrapper[4898]: I0120 04:07:03.963366 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065160 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065204 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065393 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065520 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065619 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-config\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065645 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065730 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-svc\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065756 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065816 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlhqb\" (UniqueName: \"kubernetes.io/projected/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-kube-api-access-hlhqb\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065863 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpdbj\" (UniqueName: \"kubernetes.io/projected/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-kube-api-access-jpdbj\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065913 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.065955 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-combined-ca-bundle\") pod 
\"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.066339 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.071089 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.071124 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.071547 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.079041 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.085569 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpdbj\" (UniqueName: \"kubernetes.io/projected/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-kube-api-access-jpdbj\") pod \"cinder-scheduler-0\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.167380 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.167645 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.167743 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.167886 4898 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-config\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.167975 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-svc\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.168058 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlhqb\" (UniqueName: \"kubernetes.io/projected/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-kube-api-access-hlhqb\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.169174 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.169828 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.170475 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-config\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.170891 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-svc\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.170912 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.181215 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.182642 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.184687 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.204292 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.221128 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.236960 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlhqb\" (UniqueName: \"kubernetes.io/projected/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-kube-api-access-hlhqb\") pod \"dnsmasq-dns-6578955fd5-zkj8r\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") " pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.246273 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.371397 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.371800 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data-custom\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.371876 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-scripts\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.371912 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-logs\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.373036 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.373221 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqt5s\" (UniqueName: \"kubernetes.io/projected/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-kube-api-access-fqt5s\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.373247 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.475005 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.475101 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqt5s\" (UniqueName: \"kubernetes.io/projected/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-kube-api-access-fqt5s\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.475136 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.475207 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.475252 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data-custom\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.475323 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-scripts\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.475361 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-logs\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.475652 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.476564 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-logs\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.479245 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-scripts\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.479803 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.485293 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data-custom\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.485345 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.495186 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqt5s\" (UniqueName: \"kubernetes.io/projected/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-kube-api-access-fqt5s\") pod \"cinder-api-0\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.514518 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.761898 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 04:07:04 crc kubenswrapper[4898]: W0120 04:07:04.768325 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda170c9ba_92a5_4a91_b4f1_0a6be53941e5.slice/crio-a4c348c6b3c80fb7ef23519492b0f867c80846e0ac9918ea40ee737954eb0299 WatchSource:0}: Error finding container a4c348c6b3c80fb7ef23519492b0f867c80846e0ac9918ea40ee737954eb0299: Status 404 returned error can't find the container with id a4c348c6b3c80fb7ef23519492b0f867c80846e0ac9918ea40ee737954eb0299 Jan 20 04:07:04 crc kubenswrapper[4898]: I0120 04:07:04.824997 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zkj8r"] Jan 20 04:07:05 crc kubenswrapper[4898]: I0120 04:07:05.017944 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 04:07:05 crc kubenswrapper[4898]: W0120 04:07:05.084199 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52ac8ca0_9523_4d9e_8a71_22f8452a65ba.slice/crio-93b5379a257ef02f669ba27235a07e67616cb39353f202366cd6d6e6abc71d32 WatchSource:0}: Error finding container 93b5379a257ef02f669ba27235a07e67616cb39353f202366cd6d6e6abc71d32: Status 404 returned error can't find the container with id 93b5379a257ef02f669ba27235a07e67616cb39353f202366cd6d6e6abc71d32 Jan 20 04:07:05 crc kubenswrapper[4898]: I0120 04:07:05.391972 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-b6z25" podUID="bf594a05-0b51-41bc-b43d-ae25e2b98843" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Jan 20 04:07:05 crc kubenswrapper[4898]: I0120 04:07:05.558477 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"52ac8ca0-9523-4d9e-8a71-22f8452a65ba","Type":"ContainerStarted","Data":"93b5379a257ef02f669ba27235a07e67616cb39353f202366cd6d6e6abc71d32"} Jan 20 04:07:05 crc kubenswrapper[4898]: I0120 04:07:05.559898 4898 generic.go:334] "Generic (PLEG): container finished" podID="109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" containerID="2e3aac3935084d39b64cd2bcc9374f6964e91fffe35cc32c78b822f761e3c5b9" exitCode=0 Jan 20 04:07:05 crc kubenswrapper[4898]: I0120 04:07:05.559973 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" event={"ID":"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e","Type":"ContainerDied","Data":"2e3aac3935084d39b64cd2bcc9374f6964e91fffe35cc32c78b822f761e3c5b9"} Jan 20 04:07:05 crc kubenswrapper[4898]: I0120 04:07:05.560003 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" event={"ID":"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e","Type":"ContainerStarted","Data":"3d4ddb59b4552a7b325cc9846c938ad4b9526c49fcbf96e3f8b769f18ea3259e"} Jan 20 04:07:05 crc kubenswrapper[4898]: I0120 04:07:05.563473 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a170c9ba-92a5-4a91-b4f1-0a6be53941e5","Type":"ContainerStarted","Data":"a4c348c6b3c80fb7ef23519492b0f867c80846e0ac9918ea40ee737954eb0299"} Jan 20 04:07:05 crc kubenswrapper[4898]: I0120 04:07:05.946552 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.367476 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.516285 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-sg-core-conf-yaml\") pod \"635aa30f-d711-43d2-906b-b951f5c6a9ad\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.516496 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-log-httpd\") pod \"635aa30f-d711-43d2-906b-b951f5c6a9ad\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.516535 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-config-data\") pod \"635aa30f-d711-43d2-906b-b951f5c6a9ad\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.516557 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-scripts\") pod \"635aa30f-d711-43d2-906b-b951f5c6a9ad\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.516614 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr2zk\" (UniqueName: \"kubernetes.io/projected/635aa30f-d711-43d2-906b-b951f5c6a9ad-kube-api-access-nr2zk\") pod \"635aa30f-d711-43d2-906b-b951f5c6a9ad\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.516649 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-run-httpd\") pod \"635aa30f-d711-43d2-906b-b951f5c6a9ad\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.516708 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-combined-ca-bundle\") pod \"635aa30f-d711-43d2-906b-b951f5c6a9ad\" (UID: \"635aa30f-d711-43d2-906b-b951f5c6a9ad\") " Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.517405 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "635aa30f-d711-43d2-906b-b951f5c6a9ad" (UID: "635aa30f-d711-43d2-906b-b951f5c6a9ad"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.518225 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "635aa30f-d711-43d2-906b-b951f5c6a9ad" (UID: "635aa30f-d711-43d2-906b-b951f5c6a9ad"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.521671 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-scripts" (OuterVolumeSpecName: "scripts") pod "635aa30f-d711-43d2-906b-b951f5c6a9ad" (UID: "635aa30f-d711-43d2-906b-b951f5c6a9ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.522117 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/635aa30f-d711-43d2-906b-b951f5c6a9ad-kube-api-access-nr2zk" (OuterVolumeSpecName: "kube-api-access-nr2zk") pod "635aa30f-d711-43d2-906b-b951f5c6a9ad" (UID: "635aa30f-d711-43d2-906b-b951f5c6a9ad"). InnerVolumeSpecName "kube-api-access-nr2zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.560659 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "635aa30f-d711-43d2-906b-b951f5c6a9ad" (UID: "635aa30f-d711-43d2-906b-b951f5c6a9ad"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.584179 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a170c9ba-92a5-4a91-b4f1-0a6be53941e5","Type":"ContainerStarted","Data":"2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f"} Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.590793 4898 generic.go:334] "Generic (PLEG): container finished" podID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerID="6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d" exitCode=0 Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.590845 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"635aa30f-d711-43d2-906b-b951f5c6a9ad","Type":"ContainerDied","Data":"6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d"} Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.590871 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"635aa30f-d711-43d2-906b-b951f5c6a9ad","Type":"ContainerDied","Data":"83b5f75969d9259b65fd610041ec7cf663a1c23517984366459c121ece269ce1"} Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.590886 4898 scope.go:117] "RemoveContainer" containerID="b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.591002 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.601421 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"52ac8ca0-9523-4d9e-8a71-22f8452a65ba","Type":"ContainerStarted","Data":"164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96"} Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.601487 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"52ac8ca0-9523-4d9e-8a71-22f8452a65ba","Type":"ContainerStarted","Data":"b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3"} Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.601653 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="52ac8ca0-9523-4d9e-8a71-22f8452a65ba" containerName="cinder-api-log" containerID="cri-o://b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3" gracePeriod=30 Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.601767 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.602381 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="52ac8ca0-9523-4d9e-8a71-22f8452a65ba" containerName="cinder-api" containerID="cri-o://164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96" gracePeriod=30 Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.610941 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" event={"ID":"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e","Type":"ContainerStarted","Data":"03d8fa03e72cee79da1294ff76f42fd52a90da35a5ca934d03562cb2dc30ba42"} Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.611623 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 
04:07:06.619058 4898 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.619098 4898 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.619111 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.619123 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr2zk\" (UniqueName: \"kubernetes.io/projected/635aa30f-d711-43d2-906b-b951f5c6a9ad-kube-api-access-nr2zk\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.619139 4898 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/635aa30f-d711-43d2-906b-b951f5c6a9ad-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.628624 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.6284968380000002 podStartE2EDuration="2.628496838s" podCreationTimestamp="2026-01-20 04:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:07:06.624688169 +0000 UTC m=+1073.224476028" watchObservedRunningTime="2026-01-20 04:07:06.628496838 +0000 UTC m=+1073.228284697" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.651859 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-config-data" (OuterVolumeSpecName: "config-data") pod "635aa30f-d711-43d2-906b-b951f5c6a9ad" (UID: "635aa30f-d711-43d2-906b-b951f5c6a9ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.652270 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "635aa30f-d711-43d2-906b-b951f5c6a9ad" (UID: "635aa30f-d711-43d2-906b-b951f5c6a9ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.653454 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" podStartSLOduration=3.653420202 podStartE2EDuration="3.653420202s" podCreationTimestamp="2026-01-20 04:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:07:06.64573871 +0000 UTC m=+1073.245526579" watchObservedRunningTime="2026-01-20 04:07:06.653420202 +0000 UTC m=+1073.253208061" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.732886 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.732928 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/635aa30f-d711-43d2-906b-b951f5c6a9ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.734296 4898 scope.go:117] "RemoveContainer" containerID="e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.869602 4898 scope.go:117] "RemoveContainer" containerID="6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.912592 4898 scope.go:117] "RemoveContainer" containerID="f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.933800 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.933980 4898 scope.go:117] "RemoveContainer" containerID="b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd" Jan 20 04:07:06 crc kubenswrapper[4898]: E0120 04:07:06.934326 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd\": container with ID starting with b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd not found: ID does not exist" containerID="b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.934388 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd"} err="failed to get container status \"b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd\": rpc error: code = NotFound desc = could not find container \"b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd\": container with ID starting with b5dccd03ca7df37a6998db882e4b4b24337df76b81c850a7028159666823c7cd not found: ID does not exist" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.934416 4898 scope.go:117] "RemoveContainer" containerID="e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27" Jan 20 04:07:06 crc kubenswrapper[4898]: E0120 04:07:06.934728 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27\": container with ID starting with 
e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27 not found: ID does not exist" containerID="e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.934756 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27"} err="failed to get container status \"e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27\": rpc error: code = NotFound desc = could not find container \"e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27\": container with ID starting with e3e5046674c2c49c56ffd7a7fa5d36e84a6b2d01af4bcc13d76e7bac869f0c27 not found: ID does not exist" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.934775 4898 scope.go:117] "RemoveContainer" containerID="6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d" Jan 20 04:07:06 crc kubenswrapper[4898]: E0120 04:07:06.934985 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d\": container with ID starting with 6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d not found: ID does not exist" containerID="6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.935014 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d"} err="failed to get container status \"6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d\": rpc error: code = NotFound desc = could not find container \"6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d\": container with ID starting with 6709d6cebdcdfc43f66e79d3fe69461a2621f2b482497fd84b5fa54faf86a84d not found: ID does not exist" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.935031 4898 scope.go:117] "RemoveContainer" containerID="f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489" Jan 20 04:07:06 crc kubenswrapper[4898]: E0120 04:07:06.936363 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489\": container with ID starting with f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489 not found: ID does not exist" containerID="f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.936401 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489"} err="failed to get container status \"f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489\": rpc error: code = NotFound desc = could not find container \"f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489\": container with ID starting with f45a1b48a38a2e85ce5566ab23c871429a47909ac06fbbb6d5a8c55465fa3489 not found: ID does not exist" Jan 20 04:07:06 crc kubenswrapper[4898]: I0120 04:07:06.979808 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.009503 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:07 crc 
kubenswrapper[4898]: E0120 04:07:07.010265 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="ceilometer-notification-agent" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.010277 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="ceilometer-notification-agent" Jan 20 04:07:07 crc kubenswrapper[4898]: E0120 04:07:07.010287 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="ceilometer-central-agent" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.010294 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="ceilometer-central-agent" Jan 20 04:07:07 crc kubenswrapper[4898]: E0120 04:07:07.010318 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="proxy-httpd" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.010325 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="proxy-httpd" Jan 20 04:07:07 crc kubenswrapper[4898]: E0120 04:07:07.010360 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="sg-core" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.010365 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="sg-core" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.010663 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="sg-core" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.010676 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="ceilometer-central-agent" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.010714 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="ceilometer-notification-agent" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.010737 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" containerName="proxy-httpd" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.030791 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.039129 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.049122 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.049698 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.141558 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-scripts\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.141638 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n6c6\" (UniqueName: \"kubernetes.io/projected/153bc822-6bc7-4b54-a1e2-99badd3d0211-kube-api-access-4n6c6\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.141660 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.141721 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-log-httpd\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.141755 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-run-httpd\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.141820 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.141839 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-config-data\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.244017 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 
04:07:07.244352 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-config-data\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.244406 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-scripts\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.244473 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n6c6\" (UniqueName: \"kubernetes.io/projected/153bc822-6bc7-4b54-a1e2-99badd3d0211-kube-api-access-4n6c6\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.244502 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.244565 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-log-httpd\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.244608 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-run-httpd\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.245095 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-run-httpd\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.246237 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-log-httpd\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.251975 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-config-data\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.256558 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.258730 4898 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-scripts\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.260035 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.261551 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n6c6\" (UniqueName: \"kubernetes.io/projected/153bc822-6bc7-4b54-a1e2-99badd3d0211-kube-api-access-4n6c6\") pod \"ceilometer-0\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.362952 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.641982 4898 generic.go:334] "Generic (PLEG): container finished" podID="52ac8ca0-9523-4d9e-8a71-22f8452a65ba" containerID="b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3" exitCode=143 Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.642495 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"52ac8ca0-9523-4d9e-8a71-22f8452a65ba","Type":"ContainerDied","Data":"b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3"} Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.645266 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a170c9ba-92a5-4a91-b4f1-0a6be53941e5","Type":"ContainerStarted","Data":"fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893"} Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.668852 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.530085573 podStartE2EDuration="4.668833118s" podCreationTimestamp="2026-01-20 04:07:03 +0000 UTC" firstStartedPulling="2026-01-20 04:07:04.77012035 +0000 UTC m=+1071.369908209" lastFinishedPulling="2026-01-20 04:07:05.908867895 +0000 UTC m=+1072.508655754" observedRunningTime="2026-01-20 04:07:07.664233273 +0000 UTC m=+1074.264021132" watchObservedRunningTime="2026-01-20 04:07:07.668833118 +0000 UTC m=+1074.268620977" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.731482 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="635aa30f-d711-43d2-906b-b951f5c6a9ad" path="/var/lib/kubelet/pods/635aa30f-d711-43d2-906b-b951f5c6a9ad/volumes" Jan 20 04:07:07 crc kubenswrapper[4898]: I0120 04:07:07.836182 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:07 crc kubenswrapper[4898]: W0120 04:07:07.841190 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod153bc822_6bc7_4b54_a1e2_99badd3d0211.slice/crio-a1ce4b03c4444ed247fe03db5f4e637aa1610a8203b2bcbfcaf92f3404fefbbc WatchSource:0}: Error finding container a1ce4b03c4444ed247fe03db5f4e637aa1610a8203b2bcbfcaf92f3404fefbbc: Status 404 returned error can't find the container with id a1ce4b03c4444ed247fe03db5f4e637aa1610a8203b2bcbfcaf92f3404fefbbc Jan 20 04:07:08 crc 
kubenswrapper[4898]: I0120 04:07:08.661836 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"153bc822-6bc7-4b54-a1e2-99badd3d0211","Type":"ContainerStarted","Data":"69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7"} Jan 20 04:07:08 crc kubenswrapper[4898]: I0120 04:07:08.662326 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"153bc822-6bc7-4b54-a1e2-99badd3d0211","Type":"ContainerStarted","Data":"a1ce4b03c4444ed247fe03db5f4e637aa1610a8203b2bcbfcaf92f3404fefbbc"} Jan 20 04:07:09 crc kubenswrapper[4898]: I0120 04:07:09.205296 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 20 04:07:09 crc kubenswrapper[4898]: I0120 04:07:09.671422 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"153bc822-6bc7-4b54-a1e2-99badd3d0211","Type":"ContainerStarted","Data":"c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c"} Jan 20 04:07:10 crc kubenswrapper[4898]: I0120 04:07:10.681143 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"153bc822-6bc7-4b54-a1e2-99badd3d0211","Type":"ContainerStarted","Data":"be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9"} Jan 20 04:07:11 crc kubenswrapper[4898]: I0120 04:07:11.684958 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:07:11 crc kubenswrapper[4898]: I0120 04:07:11.690279 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"153bc822-6bc7-4b54-a1e2-99badd3d0211","Type":"ContainerStarted","Data":"0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094"} Jan 20 04:07:11 crc kubenswrapper[4898]: I0120 04:07:11.690531 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 04:07:11 crc kubenswrapper[4898]: I0120 04:07:11.726409 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.189532664 podStartE2EDuration="5.726382692s" podCreationTimestamp="2026-01-20 04:07:06 +0000 UTC" firstStartedPulling="2026-01-20 04:07:07.843754509 +0000 UTC m=+1074.443542368" lastFinishedPulling="2026-01-20 04:07:11.380604537 +0000 UTC m=+1077.980392396" observedRunningTime="2026-01-20 04:07:11.720070494 +0000 UTC m=+1078.319858363" watchObservedRunningTime="2026-01-20 04:07:11.726382692 +0000 UTC m=+1078.326170551" Jan 20 04:07:11 crc kubenswrapper[4898]: I0120 04:07:11.776164 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-668d8597cb-gql24" Jan 20 04:07:11 crc kubenswrapper[4898]: I0120 04:07:11.828024 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6766898678-74xkh"] Jan 20 04:07:11 crc kubenswrapper[4898]: I0120 04:07:11.828308 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6766898678-74xkh" podUID="ed637740-b1d6-464e-9167-010b86294ae0" containerName="barbican-api-log" containerID="cri-o://f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0" gracePeriod=30 Jan 20 04:07:11 crc kubenswrapper[4898]: I0120 04:07:11.828387 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6766898678-74xkh" podUID="ed637740-b1d6-464e-9167-010b86294ae0" 
containerName="barbican-api" containerID="cri-o://e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b" gracePeriod=30 Jan 20 04:07:12 crc kubenswrapper[4898]: I0120 04:07:12.700545 4898 generic.go:334] "Generic (PLEG): container finished" podID="ed637740-b1d6-464e-9167-010b86294ae0" containerID="f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0" exitCode=143 Jan 20 04:07:12 crc kubenswrapper[4898]: I0120 04:07:12.700629 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6766898678-74xkh" event={"ID":"ed637740-b1d6-464e-9167-010b86294ae0","Type":"ContainerDied","Data":"f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0"} Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.250598 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.316302 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-krtcn"] Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.316602 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" podUID="1d4ffd01-3225-47f8-a88b-00acb1506664" containerName="dnsmasq-dns" containerID="cri-o://8427e59f35f3faf19f0ea544471797793fb2b5065da9d2c5f3f9b8bf0b0c07d8" gracePeriod=10 Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.475161 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.534981 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.732626 4898 generic.go:334] "Generic (PLEG): container finished" podID="1d4ffd01-3225-47f8-a88b-00acb1506664" containerID="8427e59f35f3faf19f0ea544471797793fb2b5065da9d2c5f3f9b8bf0b0c07d8" exitCode=0 Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.733297 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a170c9ba-92a5-4a91-b4f1-0a6be53941e5" containerName="cinder-scheduler" containerID="cri-o://2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f" gracePeriod=30 Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.733419 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" event={"ID":"1d4ffd01-3225-47f8-a88b-00acb1506664","Type":"ContainerDied","Data":"8427e59f35f3faf19f0ea544471797793fb2b5065da9d2c5f3f9b8bf0b0c07d8"} Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.734863 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a170c9ba-92a5-4a91-b4f1-0a6be53941e5" containerName="probe" containerID="cri-o://fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893" gracePeriod=30 Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.867548 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.967347 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6766898678-74xkh" podUID="ed637740-b1d6-464e-9167-010b86294ae0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:57454->10.217.0.157:9311: read: connection reset by peer" Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.967391 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6766898678-74xkh" podUID="ed637740-b1d6-464e-9167-010b86294ae0" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:57466->10.217.0.157:9311: read: connection reset by peer" Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.992916 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-sb\") pod \"1d4ffd01-3225-47f8-a88b-00acb1506664\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.993096 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-swift-storage-0\") pod \"1d4ffd01-3225-47f8-a88b-00acb1506664\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.993154 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5rrq\" (UniqueName: \"kubernetes.io/projected/1d4ffd01-3225-47f8-a88b-00acb1506664-kube-api-access-h5rrq\") pod \"1d4ffd01-3225-47f8-a88b-00acb1506664\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.993241 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-nb\") pod \"1d4ffd01-3225-47f8-a88b-00acb1506664\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.993321 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-svc\") pod \"1d4ffd01-3225-47f8-a88b-00acb1506664\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " Jan 20 04:07:14 crc kubenswrapper[4898]: I0120 04:07:14.993378 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-config\") pod \"1d4ffd01-3225-47f8-a88b-00acb1506664\" (UID: \"1d4ffd01-3225-47f8-a88b-00acb1506664\") " Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.005835 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d4ffd01-3225-47f8-a88b-00acb1506664-kube-api-access-h5rrq" (OuterVolumeSpecName: "kube-api-access-h5rrq") pod "1d4ffd01-3225-47f8-a88b-00acb1506664" (UID: "1d4ffd01-3225-47f8-a88b-00acb1506664"). InnerVolumeSpecName "kube-api-access-h5rrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.039596 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-config" (OuterVolumeSpecName: "config") pod "1d4ffd01-3225-47f8-a88b-00acb1506664" (UID: "1d4ffd01-3225-47f8-a88b-00acb1506664"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.044277 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d4ffd01-3225-47f8-a88b-00acb1506664" (UID: "1d4ffd01-3225-47f8-a88b-00acb1506664"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.046795 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d4ffd01-3225-47f8-a88b-00acb1506664" (UID: "1d4ffd01-3225-47f8-a88b-00acb1506664"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.059352 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d4ffd01-3225-47f8-a88b-00acb1506664" (UID: "1d4ffd01-3225-47f8-a88b-00acb1506664"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.063088 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d4ffd01-3225-47f8-a88b-00acb1506664" (UID: "1d4ffd01-3225-47f8-a88b-00acb1506664"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.095657 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.095967 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.095978 4898 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.095989 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5rrq\" (UniqueName: \"kubernetes.io/projected/1d4ffd01-3225-47f8-a88b-00acb1506664-kube-api-access-h5rrq\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.096003 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.096011 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4ffd01-3225-47f8-a88b-00acb1506664-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.387500 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.503253 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data-custom\") pod \"ed637740-b1d6-464e-9167-010b86294ae0\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.503359 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data\") pod \"ed637740-b1d6-464e-9167-010b86294ae0\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.503421 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-combined-ca-bundle\") pod \"ed637740-b1d6-464e-9167-010b86294ae0\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.503475 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kc8j\" (UniqueName: \"kubernetes.io/projected/ed637740-b1d6-464e-9167-010b86294ae0-kube-api-access-9kc8j\") pod \"ed637740-b1d6-464e-9167-010b86294ae0\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.503583 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed637740-b1d6-464e-9167-010b86294ae0-logs\") pod 
\"ed637740-b1d6-464e-9167-010b86294ae0\" (UID: \"ed637740-b1d6-464e-9167-010b86294ae0\") " Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.504088 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed637740-b1d6-464e-9167-010b86294ae0-logs" (OuterVolumeSpecName: "logs") pod "ed637740-b1d6-464e-9167-010b86294ae0" (UID: "ed637740-b1d6-464e-9167-010b86294ae0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.512611 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed637740-b1d6-464e-9167-010b86294ae0-kube-api-access-9kc8j" (OuterVolumeSpecName: "kube-api-access-9kc8j") pod "ed637740-b1d6-464e-9167-010b86294ae0" (UID: "ed637740-b1d6-464e-9167-010b86294ae0"). InnerVolumeSpecName "kube-api-access-9kc8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.512916 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ed637740-b1d6-464e-9167-010b86294ae0" (UID: "ed637740-b1d6-464e-9167-010b86294ae0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.539537 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed637740-b1d6-464e-9167-010b86294ae0" (UID: "ed637740-b1d6-464e-9167-010b86294ae0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.558886 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data" (OuterVolumeSpecName: "config-data") pod "ed637740-b1d6-464e-9167-010b86294ae0" (UID: "ed637740-b1d6-464e-9167-010b86294ae0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.605487 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kc8j\" (UniqueName: \"kubernetes.io/projected/ed637740-b1d6-464e-9167-010b86294ae0-kube-api-access-9kc8j\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.605516 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed637740-b1d6-464e-9167-010b86294ae0-logs\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.605528 4898 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.605536 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.605545 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed637740-b1d6-464e-9167-010b86294ae0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.742522 4898 generic.go:334] "Generic (PLEG): container finished" podID="ed637740-b1d6-464e-9167-010b86294ae0" containerID="e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b" exitCode=0 Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.742571 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6766898678-74xkh" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.742604 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6766898678-74xkh" event={"ID":"ed637740-b1d6-464e-9167-010b86294ae0","Type":"ContainerDied","Data":"e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b"} Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.742653 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6766898678-74xkh" event={"ID":"ed637740-b1d6-464e-9167-010b86294ae0","Type":"ContainerDied","Data":"50201ad7e362c2e8552a2853471560817780bc72df94318e1deebe0b645044f9"} Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.742672 4898 scope.go:117] "RemoveContainer" containerID="e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.746154 4898 generic.go:334] "Generic (PLEG): container finished" podID="a170c9ba-92a5-4a91-b4f1-0a6be53941e5" containerID="fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893" exitCode=0 Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.746185 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a170c9ba-92a5-4a91-b4f1-0a6be53941e5","Type":"ContainerDied","Data":"fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893"} Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.748361 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" event={"ID":"1d4ffd01-3225-47f8-a88b-00acb1506664","Type":"ContainerDied","Data":"bfdd1d5a4387e807ff7d5396ca7b489ce7003d477e637471fd37fbc28b22f0df"} Jan 20 04:07:15 crc kubenswrapper[4898]: 
I0120 04:07:15.748462 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-krtcn" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.777787 4898 scope.go:117] "RemoveContainer" containerID="f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.788177 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6766898678-74xkh"] Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.796095 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6766898678-74xkh"] Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.804117 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-krtcn"] Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.805509 4898 scope.go:117] "RemoveContainer" containerID="e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b" Jan 20 04:07:15 crc kubenswrapper[4898]: E0120 04:07:15.805938 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b\": container with ID starting with e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b not found: ID does not exist" containerID="e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.805974 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b"} err="failed to get container status \"e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b\": rpc error: code = NotFound desc = could not find container \"e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b\": container with ID starting with e2889618e34bce417ce218ea5ef3b675ea2f14d11d278b86714a2c661ed3053b not found: ID does not exist" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.805998 4898 scope.go:117] "RemoveContainer" containerID="f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0" Jan 20 04:07:15 crc kubenswrapper[4898]: E0120 04:07:15.806364 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0\": container with ID starting with f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0 not found: ID does not exist" containerID="f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.806394 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0"} err="failed to get container status \"f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0\": rpc error: code = NotFound desc = could not find container \"f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0\": container with ID starting with f3d909c07f16057de2f2384fa5c793ae91730495d4b2fa0ac3164bca5fb565f0 not found: ID does not exist" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.806412 4898 scope.go:117] "RemoveContainer" containerID="8427e59f35f3faf19f0ea544471797793fb2b5065da9d2c5f3f9b8bf0b0c07d8" Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.812507 4898 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-krtcn"] Jan 20 04:07:15 crc kubenswrapper[4898]: I0120 04:07:15.822932 4898 scope.go:117] "RemoveContainer" containerID="a6c6bbcc584993754336c388c084e70e7bcc1937398e3bc350910838639eb3b7" Jan 20 04:07:16 crc kubenswrapper[4898]: I0120 04:07:16.454615 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 20 04:07:17 crc kubenswrapper[4898]: I0120 04:07:17.735349 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d4ffd01-3225-47f8-a88b-00acb1506664" path="/var/lib/kubelet/pods/1d4ffd01-3225-47f8-a88b-00acb1506664/volumes" Jan 20 04:07:17 crc kubenswrapper[4898]: I0120 04:07:17.736323 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed637740-b1d6-464e-9167-010b86294ae0" path="/var/lib/kubelet/pods/ed637740-b1d6-464e-9167-010b86294ae0/volumes" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.446396 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.597254 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data\") pod \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.597886 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-etc-machine-id\") pod \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.598001 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-combined-ca-bundle\") pod \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.598007 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a170c9ba-92a5-4a91-b4f1-0a6be53941e5" (UID: "a170c9ba-92a5-4a91-b4f1-0a6be53941e5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.598260 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data-custom\") pod \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.598379 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-scripts\") pod \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.598500 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpdbj\" (UniqueName: \"kubernetes.io/projected/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-kube-api-access-jpdbj\") pod \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\" (UID: \"a170c9ba-92a5-4a91-b4f1-0a6be53941e5\") " Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.598959 4898 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.604566 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a170c9ba-92a5-4a91-b4f1-0a6be53941e5" (UID: "a170c9ba-92a5-4a91-b4f1-0a6be53941e5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.604584 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-scripts" (OuterVolumeSpecName: "scripts") pod "a170c9ba-92a5-4a91-b4f1-0a6be53941e5" (UID: "a170c9ba-92a5-4a91-b4f1-0a6be53941e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.610914 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-kube-api-access-jpdbj" (OuterVolumeSpecName: "kube-api-access-jpdbj") pod "a170c9ba-92a5-4a91-b4f1-0a6be53941e5" (UID: "a170c9ba-92a5-4a91-b4f1-0a6be53941e5"). InnerVolumeSpecName "kube-api-access-jpdbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.655210 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a170c9ba-92a5-4a91-b4f1-0a6be53941e5" (UID: "a170c9ba-92a5-4a91-b4f1-0a6be53941e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.701236 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.701287 4898 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.701305 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.701322 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpdbj\" (UniqueName: \"kubernetes.io/projected/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-kube-api-access-jpdbj\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.729410 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data" (OuterVolumeSpecName: "config-data") pod "a170c9ba-92a5-4a91-b4f1-0a6be53941e5" (UID: "a170c9ba-92a5-4a91-b4f1-0a6be53941e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.781951 4898 generic.go:334] "Generic (PLEG): container finished" podID="a170c9ba-92a5-4a91-b4f1-0a6be53941e5" containerID="2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f" exitCode=0 Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.782021 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.782034 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a170c9ba-92a5-4a91-b4f1-0a6be53941e5","Type":"ContainerDied","Data":"2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f"} Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.783280 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a170c9ba-92a5-4a91-b4f1-0a6be53941e5","Type":"ContainerDied","Data":"a4c348c6b3c80fb7ef23519492b0f867c80846e0ac9918ea40ee737954eb0299"} Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.783353 4898 scope.go:117] "RemoveContainer" containerID="fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.806026 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a170c9ba-92a5-4a91-b4f1-0a6be53941e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.829368 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.829799 4898 scope.go:117] "RemoveContainer" containerID="2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.839376 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.863613 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 04:07:18 crc kubenswrapper[4898]: E0120 04:07:18.864024 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a170c9ba-92a5-4a91-b4f1-0a6be53941e5" containerName="cinder-scheduler" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.864042 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a170c9ba-92a5-4a91-b4f1-0a6be53941e5" containerName="cinder-scheduler" Jan 20 04:07:18 crc kubenswrapper[4898]: E0120 04:07:18.864071 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4ffd01-3225-47f8-a88b-00acb1506664" containerName="dnsmasq-dns" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.864078 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4ffd01-3225-47f8-a88b-00acb1506664" containerName="dnsmasq-dns" Jan 20 04:07:18 crc kubenswrapper[4898]: E0120 04:07:18.864087 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed637740-b1d6-464e-9167-010b86294ae0" containerName="barbican-api" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.864093 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed637740-b1d6-464e-9167-010b86294ae0" containerName="barbican-api" Jan 20 04:07:18 crc kubenswrapper[4898]: E0120 04:07:18.864110 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed637740-b1d6-464e-9167-010b86294ae0" containerName="barbican-api-log" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.864115 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed637740-b1d6-464e-9167-010b86294ae0" containerName="barbican-api-log" Jan 20 04:07:18 crc kubenswrapper[4898]: E0120 04:07:18.864131 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4ffd01-3225-47f8-a88b-00acb1506664" containerName="init" Jan 20 04:07:18 crc kubenswrapper[4898]: 
I0120 04:07:18.864136 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4ffd01-3225-47f8-a88b-00acb1506664" containerName="init" Jan 20 04:07:18 crc kubenswrapper[4898]: E0120 04:07:18.864147 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a170c9ba-92a5-4a91-b4f1-0a6be53941e5" containerName="probe" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.864153 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a170c9ba-92a5-4a91-b4f1-0a6be53941e5" containerName="probe" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.864302 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4ffd01-3225-47f8-a88b-00acb1506664" containerName="dnsmasq-dns" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.864311 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="a170c9ba-92a5-4a91-b4f1-0a6be53941e5" containerName="cinder-scheduler" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.864334 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed637740-b1d6-464e-9167-010b86294ae0" containerName="barbican-api" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.864342 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed637740-b1d6-464e-9167-010b86294ae0" containerName="barbican-api-log" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.864354 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="a170c9ba-92a5-4a91-b4f1-0a6be53941e5" containerName="probe" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.865519 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.867635 4898 scope.go:117] "RemoveContainer" containerID="fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.868842 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 20 04:07:18 crc kubenswrapper[4898]: E0120 04:07:18.869820 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893\": container with ID starting with fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893 not found: ID does not exist" containerID="fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.869938 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893"} err="failed to get container status \"fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893\": rpc error: code = NotFound desc = could not find container \"fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893\": container with ID starting with fc0fd48988d1de24a2c332224112bfa9d194a4d7261fef09d804f411c7678893 not found: ID does not exist" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.870043 4898 scope.go:117] "RemoveContainer" containerID="2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f" Jan 20 04:07:18 crc kubenswrapper[4898]: E0120 04:07:18.870649 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f\": container with 
ID starting with 2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f not found: ID does not exist" containerID="2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.870755 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f"} err="failed to get container status \"2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f\": rpc error: code = NotFound desc = could not find container \"2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f\": container with ID starting with 2ceddefd90ba9773449edd6cf6cde2d9303189cce46302c3d68cb234380a099f not found: ID does not exist" Jan 20 04:07:18 crc kubenswrapper[4898]: I0120 04:07:18.874801 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.015244 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.015342 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.015366 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.015396 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.015460 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpc7w\" (UniqueName: \"kubernetes.io/projected/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-kube-api-access-mpc7w\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.015482 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.116966 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.117290 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.117847 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.117953 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.118082 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.118225 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpc7w\" (UniqueName: \"kubernetes.io/projected/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-kube-api-access-mpc7w\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.117992 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.122215 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.125312 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.125533 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.129770 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.141718 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpc7w\" (UniqueName: \"kubernetes.io/projected/6b9d909f-718d-4eb5-8321-f1f20f54e2a4-kube-api-access-mpc7w\") pod \"cinder-scheduler-0\" (UID: \"6b9d909f-718d-4eb5-8321-f1f20f54e2a4\") " pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.189321 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.390565 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.648087 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.737236 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a170c9ba-92a5-4a91-b4f1-0a6be53941e5" path="/var/lib/kubelet/pods/a170c9ba-92a5-4a91-b4f1-0a6be53941e5/volumes" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.808056 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6b9d909f-718d-4eb5-8321-f1f20f54e2a4","Type":"ContainerStarted","Data":"b80321d40f4e1761b01464b305595e72e9b6705d375f88338f1c3b0fc27e4d98"} Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.999683 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-659876b84-8djq9" Jan 20 04:07:19 crc kubenswrapper[4898]: I0120 04:07:19.999857 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-659876b84-8djq9" Jan 20 04:07:20 crc kubenswrapper[4898]: I0120 04:07:20.246469 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-8684dcb884-x94mz" Jan 20 04:07:20 crc kubenswrapper[4898]: I0120 04:07:20.821624 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6b9d909f-718d-4eb5-8321-f1f20f54e2a4","Type":"ContainerStarted","Data":"f61ae57bb2db561e0361c4235ba014d316f7684b4f17dac4eca9eb277bcc9bc5"} Jan 20 04:07:21 crc kubenswrapper[4898]: I0120 04:07:21.216767 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6885b44c89-c7rr5" Jan 20 04:07:21 crc kubenswrapper[4898]: I0120 04:07:21.286081 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8dc9f49b-vg2wk"] Jan 20 04:07:21 crc kubenswrapper[4898]: I0120 04:07:21.286305 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8dc9f49b-vg2wk" podUID="00669966-0f38-45df-9949-d2ee09f6d294" containerName="neutron-api" containerID="cri-o://92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e" gracePeriod=30 Jan 20 04:07:21 crc kubenswrapper[4898]: I0120 04:07:21.286703 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8dc9f49b-vg2wk" podUID="00669966-0f38-45df-9949-d2ee09f6d294" containerName="neutron-httpd" containerID="cri-o://38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b" gracePeriod=30 Jan 20 04:07:21 crc kubenswrapper[4898]: 
I0120 04:07:21.831908 4898 generic.go:334] "Generic (PLEG): container finished" podID="00669966-0f38-45df-9949-d2ee09f6d294" containerID="38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b" exitCode=0 Jan 20 04:07:21 crc kubenswrapper[4898]: I0120 04:07:21.831991 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8dc9f49b-vg2wk" event={"ID":"00669966-0f38-45df-9949-d2ee09f6d294","Type":"ContainerDied","Data":"38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b"} Jan 20 04:07:21 crc kubenswrapper[4898]: I0120 04:07:21.833825 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6b9d909f-718d-4eb5-8321-f1f20f54e2a4","Type":"ContainerStarted","Data":"b90dff3e6f845574a2a3ce6f524643da3686cbfda566a4e8098851f61ed9829d"} Jan 20 04:07:21 crc kubenswrapper[4898]: I0120 04:07:21.858587 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.85856602 podStartE2EDuration="3.85856602s" podCreationTimestamp="2026-01-20 04:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:07:21.852076316 +0000 UTC m=+1088.451864175" watchObservedRunningTime="2026-01-20 04:07:21.85856602 +0000 UTC m=+1088.458353879" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.190210 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.364248 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-659bc66b4c-5cnqm"] Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.367121 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.370294 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.372032 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.372411 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.381175 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-659bc66b4c-5cnqm"] Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.506858 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.507149 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="ceilometer-central-agent" containerID="cri-o://69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7" gracePeriod=30 Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.507262 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="sg-core" containerID="cri-o://be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9" gracePeriod=30 Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.507367 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="ceilometer-notification-agent" containerID="cri-o://c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c" gracePeriod=30 Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.507359 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="proxy-httpd" containerID="cri-o://0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094" gracePeriod=30 Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.519723 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.558204 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-config-data\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.558243 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-internal-tls-certs\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.558268 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-etc-swift\") pod 
\"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.558298 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-log-httpd\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.558372 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-combined-ca-bundle\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.558401 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx8b9\" (UniqueName: \"kubernetes.io/projected/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-kube-api-access-lx8b9\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.558423 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-run-httpd\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.558523 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-public-tls-certs\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.660662 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-log-httpd\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.660712 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-combined-ca-bundle\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.660739 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx8b9\" (UniqueName: \"kubernetes.io/projected/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-kube-api-access-lx8b9\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.660758 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-run-httpd\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.660828 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-public-tls-certs\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.660914 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-config-data\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.660991 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-internal-tls-certs\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.661014 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-etc-swift\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.662112 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-run-httpd\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.662326 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-log-httpd\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.667142 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-combined-ca-bundle\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.689582 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-etc-swift\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.690218 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-internal-tls-certs\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: 
\"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.692010 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx8b9\" (UniqueName: \"kubernetes.io/projected/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-kube-api-access-lx8b9\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.694294 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-config-data\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.695047 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a-public-tls-certs\") pod \"swift-proxy-659bc66b4c-5cnqm\" (UID: \"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a\") " pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.707579 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.832227 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.938000 4898 generic.go:334] "Generic (PLEG): container finished" podID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerID="0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094" exitCode=0 Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.938060 4898 generic.go:334] "Generic (PLEG): container finished" podID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerID="be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9" exitCode=2 Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.938083 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"153bc822-6bc7-4b54-a1e2-99badd3d0211","Type":"ContainerDied","Data":"0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094"} Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.938128 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"153bc822-6bc7-4b54-a1e2-99badd3d0211","Type":"ContainerDied","Data":"be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9"} Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.942553 4898 generic.go:334] "Generic (PLEG): container finished" podID="00669966-0f38-45df-9949-d2ee09f6d294" containerID="92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e" exitCode=0 Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.942626 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8dc9f49b-vg2wk" event={"ID":"00669966-0f38-45df-9949-d2ee09f6d294","Type":"ContainerDied","Data":"92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e"} Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.942658 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8dc9f49b-vg2wk" 
event={"ID":"00669966-0f38-45df-9949-d2ee09f6d294","Type":"ContainerDied","Data":"2b5ea78d05ef6757f75a9298f55fa6afd3b5c576af6a6aab47bd959494c437c2"} Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.942706 4898 scope.go:117] "RemoveContainer" containerID="38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.942719 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8dc9f49b-vg2wk" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.969333 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xtzg\" (UniqueName: \"kubernetes.io/projected/00669966-0f38-45df-9949-d2ee09f6d294-kube-api-access-5xtzg\") pod \"00669966-0f38-45df-9949-d2ee09f6d294\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.969380 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-httpd-config\") pod \"00669966-0f38-45df-9949-d2ee09f6d294\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.969619 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-ovndb-tls-certs\") pod \"00669966-0f38-45df-9949-d2ee09f6d294\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.969645 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-config\") pod \"00669966-0f38-45df-9949-d2ee09f6d294\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.969767 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-combined-ca-bundle\") pod \"00669966-0f38-45df-9949-d2ee09f6d294\" (UID: \"00669966-0f38-45df-9949-d2ee09f6d294\") " Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.972700 4898 scope.go:117] "RemoveContainer" containerID="92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.979043 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "00669966-0f38-45df-9949-d2ee09f6d294" (UID: "00669966-0f38-45df-9949-d2ee09f6d294"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:24 crc kubenswrapper[4898]: I0120 04:07:24.979159 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00669966-0f38-45df-9949-d2ee09f6d294-kube-api-access-5xtzg" (OuterVolumeSpecName: "kube-api-access-5xtzg") pod "00669966-0f38-45df-9949-d2ee09f6d294" (UID: "00669966-0f38-45df-9949-d2ee09f6d294"). InnerVolumeSpecName "kube-api-access-5xtzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.002649 4898 scope.go:117] "RemoveContainer" containerID="38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b" Jan 20 04:07:25 crc kubenswrapper[4898]: E0120 04:07:25.003180 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b\": container with ID starting with 38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b not found: ID does not exist" containerID="38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.003221 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b"} err="failed to get container status \"38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b\": rpc error: code = NotFound desc = could not find container \"38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b\": container with ID starting with 38aa111244657701050ca9b1ed4a3472d906c7c86cbe8b10f9857ae7d663c58b not found: ID does not exist" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.003248 4898 scope.go:117] "RemoveContainer" containerID="92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e" Jan 20 04:07:25 crc kubenswrapper[4898]: E0120 04:07:25.003613 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e\": container with ID starting with 92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e not found: ID does not exist" containerID="92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.003670 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e"} err="failed to get container status \"92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e\": rpc error: code = NotFound desc = could not find container \"92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e\": container with ID starting with 92040fe9a7df3b6ee06c4e4beb932de055b8f602004623096cabb8bbe4a7554e not found: ID does not exist" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.018390 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-config" (OuterVolumeSpecName: "config") pod "00669966-0f38-45df-9949-d2ee09f6d294" (UID: "00669966-0f38-45df-9949-d2ee09f6d294"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.019204 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00669966-0f38-45df-9949-d2ee09f6d294" (UID: "00669966-0f38-45df-9949-d2ee09f6d294"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.039918 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "00669966-0f38-45df-9949-d2ee09f6d294" (UID: "00669966-0f38-45df-9949-d2ee09f6d294"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.071562 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.071591 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xtzg\" (UniqueName: \"kubernetes.io/projected/00669966-0f38-45df-9949-d2ee09f6d294-kube-api-access-5xtzg\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.071601 4898 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.071609 4898 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.071619 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/00669966-0f38-45df-9949-d2ee09f6d294-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.193028 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 20 04:07:25 crc kubenswrapper[4898]: E0120 04:07:25.194954 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00669966-0f38-45df-9949-d2ee09f6d294" containerName="neutron-api" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.195069 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="00669966-0f38-45df-9949-d2ee09f6d294" containerName="neutron-api" Jan 20 04:07:25 crc kubenswrapper[4898]: E0120 04:07:25.195169 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00669966-0f38-45df-9949-d2ee09f6d294" containerName="neutron-httpd" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.195254 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="00669966-0f38-45df-9949-d2ee09f6d294" containerName="neutron-httpd" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.195649 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="00669966-0f38-45df-9949-d2ee09f6d294" containerName="neutron-api" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.195794 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="00669966-0f38-45df-9949-d2ee09f6d294" containerName="neutron-httpd" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.196849 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.210603 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.211547 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.211834 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.212101 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-gsrb9" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.277919 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8dc9f49b-vg2wk"] Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.286624 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8dc9f49b-vg2wk"] Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.317075 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/37019a69-852b-47d8-8090-db3f78bbf2a5-openstack-config\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.317163 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/37019a69-852b-47d8-8090-db3f78bbf2a5-openstack-config-secret\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.317207 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp4sk\" (UniqueName: \"kubernetes.io/projected/37019a69-852b-47d8-8090-db3f78bbf2a5-kube-api-access-tp4sk\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.317259 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37019a69-852b-47d8-8090-db3f78bbf2a5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.371657 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-659bc66b4c-5cnqm"] Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.421069 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/37019a69-852b-47d8-8090-db3f78bbf2a5-openstack-config\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.421340 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/37019a69-852b-47d8-8090-db3f78bbf2a5-openstack-config-secret\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc 
kubenswrapper[4898]: I0120 04:07:25.421387 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp4sk\" (UniqueName: \"kubernetes.io/projected/37019a69-852b-47d8-8090-db3f78bbf2a5-kube-api-access-tp4sk\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.421456 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37019a69-852b-47d8-8090-db3f78bbf2a5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.422285 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/37019a69-852b-47d8-8090-db3f78bbf2a5-openstack-config\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.425636 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37019a69-852b-47d8-8090-db3f78bbf2a5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.428966 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/37019a69-852b-47d8-8090-db3f78bbf2a5-openstack-config-secret\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.438218 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp4sk\" (UniqueName: \"kubernetes.io/projected/37019a69-852b-47d8-8090-db3f78bbf2a5-kube-api-access-tp4sk\") pod \"openstackclient\" (UID: \"37019a69-852b-47d8-8090-db3f78bbf2a5\") " pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.534802 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.735180 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00669966-0f38-45df-9949-d2ee09f6d294" path="/var/lib/kubelet/pods/00669966-0f38-45df-9949-d2ee09f6d294/volumes" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.954389 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659bc66b4c-5cnqm" event={"ID":"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a","Type":"ContainerStarted","Data":"6bea2bfbd07f9f5ef87204444440504a3caa5a39566e31b42a948cd78075bd20"} Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.954453 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659bc66b4c-5cnqm" event={"ID":"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a","Type":"ContainerStarted","Data":"ce8c73854a41779f13c28ae2c0fd4ea1619cd877852bfcdd726c22721fcc9c00"} Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.954467 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659bc66b4c-5cnqm" event={"ID":"9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a","Type":"ContainerStarted","Data":"21c99dfdff0550cf61600ebe91dce877b9a966ae89fc0d0da45f13b8f3574212"} Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.954494 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.954511 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.958508 4898 generic.go:334] "Generic (PLEG): container finished" podID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerID="69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7" exitCode=0 Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.958562 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"153bc822-6bc7-4b54-a1e2-99badd3d0211","Type":"ContainerDied","Data":"69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7"} Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.983733 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 20 04:07:25 crc kubenswrapper[4898]: I0120 04:07:25.984000 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-659bc66b4c-5cnqm" podStartSLOduration=1.983981889 podStartE2EDuration="1.983981889s" podCreationTimestamp="2026-01-20 04:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:07:25.971242558 +0000 UTC m=+1092.571030417" watchObservedRunningTime="2026-01-20 04:07:25.983981889 +0000 UTC m=+1092.583769748" Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.889864 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.950437 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-run-httpd\") pod \"153bc822-6bc7-4b54-a1e2-99badd3d0211\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.950537 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-scripts\") pod \"153bc822-6bc7-4b54-a1e2-99badd3d0211\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.950596 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-sg-core-conf-yaml\") pod \"153bc822-6bc7-4b54-a1e2-99badd3d0211\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.950734 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-log-httpd\") pod \"153bc822-6bc7-4b54-a1e2-99badd3d0211\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.950763 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-combined-ca-bundle\") pod \"153bc822-6bc7-4b54-a1e2-99badd3d0211\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.950782 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-config-data\") pod \"153bc822-6bc7-4b54-a1e2-99badd3d0211\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.950849 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n6c6\" (UniqueName: \"kubernetes.io/projected/153bc822-6bc7-4b54-a1e2-99badd3d0211-kube-api-access-4n6c6\") pod \"153bc822-6bc7-4b54-a1e2-99badd3d0211\" (UID: \"153bc822-6bc7-4b54-a1e2-99badd3d0211\") " Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.954798 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "153bc822-6bc7-4b54-a1e2-99badd3d0211" (UID: "153bc822-6bc7-4b54-a1e2-99badd3d0211"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.958803 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "153bc822-6bc7-4b54-a1e2-99badd3d0211" (UID: "153bc822-6bc7-4b54-a1e2-99badd3d0211"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.979629 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153bc822-6bc7-4b54-a1e2-99badd3d0211-kube-api-access-4n6c6" (OuterVolumeSpecName: "kube-api-access-4n6c6") pod "153bc822-6bc7-4b54-a1e2-99badd3d0211" (UID: "153bc822-6bc7-4b54-a1e2-99badd3d0211"). InnerVolumeSpecName "kube-api-access-4n6c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:26 crc kubenswrapper[4898]: I0120 04:07:26.984006 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-scripts" (OuterVolumeSpecName: "scripts") pod "153bc822-6bc7-4b54-a1e2-99badd3d0211" (UID: "153bc822-6bc7-4b54-a1e2-99badd3d0211"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.049649 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"37019a69-852b-47d8-8090-db3f78bbf2a5","Type":"ContainerStarted","Data":"6d8a3609c1c99b2357f53b28d818d4a37e61898f6ad3ae19109c3714c90ab2ad"} Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.070383 4898 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.070410 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n6c6\" (UniqueName: \"kubernetes.io/projected/153bc822-6bc7-4b54-a1e2-99badd3d0211-kube-api-access-4n6c6\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.070442 4898 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/153bc822-6bc7-4b54-a1e2-99badd3d0211-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.070582 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.079795 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "153bc822-6bc7-4b54-a1e2-99badd3d0211" (UID: "153bc822-6bc7-4b54-a1e2-99badd3d0211"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.099517 4898 generic.go:334] "Generic (PLEG): container finished" podID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerID="c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c" exitCode=0 Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.101443 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.102307 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"153bc822-6bc7-4b54-a1e2-99badd3d0211","Type":"ContainerDied","Data":"c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c"} Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.102343 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"153bc822-6bc7-4b54-a1e2-99badd3d0211","Type":"ContainerDied","Data":"a1ce4b03c4444ed247fe03db5f4e637aa1610a8203b2bcbfcaf92f3404fefbbc"} Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.102372 4898 scope.go:117] "RemoveContainer" containerID="0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.135089 4898 scope.go:117] "RemoveContainer" containerID="be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.172150 4898 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.176648 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-config-data" (OuterVolumeSpecName: "config-data") pod "153bc822-6bc7-4b54-a1e2-99badd3d0211" (UID: "153bc822-6bc7-4b54-a1e2-99badd3d0211"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.187706 4898 scope.go:117] "RemoveContainer" containerID="c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.202018 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "153bc822-6bc7-4b54-a1e2-99badd3d0211" (UID: "153bc822-6bc7-4b54-a1e2-99badd3d0211"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.213885 4898 scope.go:117] "RemoveContainer" containerID="69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.230920 4898 scope.go:117] "RemoveContainer" containerID="0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094" Jan 20 04:07:27 crc kubenswrapper[4898]: E0120 04:07:27.231624 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094\": container with ID starting with 0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094 not found: ID does not exist" containerID="0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.231663 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094"} err="failed to get container status \"0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094\": rpc error: code = NotFound desc = could not find container \"0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094\": container with ID starting with 0ed3d4547a067a30640ccf9d7296e6f06e06598977de327049b11580cf586094 not found: ID does not exist" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.231684 4898 scope.go:117] "RemoveContainer" containerID="be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9" Jan 20 04:07:27 crc kubenswrapper[4898]: E0120 04:07:27.232036 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9\": container with ID starting with be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9 not found: ID does not exist" containerID="be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.232069 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9"} err="failed to get container status \"be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9\": rpc error: code = NotFound desc = could not find container \"be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9\": container with ID starting with be9278f1785ecafd6749f882fa3569ae61fcee883912f6f8892c3e299e6d67b9 not found: ID does not exist" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.232082 4898 scope.go:117] "RemoveContainer" containerID="c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c" Jan 20 04:07:27 crc kubenswrapper[4898]: E0120 04:07:27.232307 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c\": container with ID starting with c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c not found: ID does not exist" containerID="c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.232328 4898 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c"} err="failed to get container status \"c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c\": rpc error: code = NotFound desc = could not find container \"c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c\": container with ID starting with c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c not found: ID does not exist" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.232340 4898 scope.go:117] "RemoveContainer" containerID="69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7" Jan 20 04:07:27 crc kubenswrapper[4898]: E0120 04:07:27.232631 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7\": container with ID starting with 69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7 not found: ID does not exist" containerID="69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.232655 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7"} err="failed to get container status \"69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7\": rpc error: code = NotFound desc = could not find container \"69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7\": container with ID starting with 69bb4f55ec4d70ef71a54733fb23e0200f934274007bc066bffb0d2201b5b2e7 not found: ID does not exist" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.273754 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.273795 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153bc822-6bc7-4b54-a1e2-99badd3d0211-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.435455 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.450139 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.460443 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:27 crc kubenswrapper[4898]: E0120 04:07:27.460788 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="ceilometer-central-agent" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.460807 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="ceilometer-central-agent" Jan 20 04:07:27 crc kubenswrapper[4898]: E0120 04:07:27.460838 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="ceilometer-notification-agent" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.460845 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="ceilometer-notification-agent" Jan 20 04:07:27 crc kubenswrapper[4898]: 
E0120 04:07:27.460857 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="sg-core" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.460864 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="sg-core" Jan 20 04:07:27 crc kubenswrapper[4898]: E0120 04:07:27.460883 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="proxy-httpd" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.460888 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="proxy-httpd" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.461040 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="ceilometer-notification-agent" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.461053 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="ceilometer-central-agent" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.461081 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="sg-core" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.461094 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" containerName="proxy-httpd" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.462653 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.464774 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.465899 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.474819 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.577894 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-config-data\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.578009 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-scripts\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.578052 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-log-httpd\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.578112 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-run-httpd\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.578150 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.578229 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqrnk\" (UniqueName: \"kubernetes.io/projected/2383b50e-7f1b-458e-ab0e-c1bfc5698227-kube-api-access-vqrnk\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.578303 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.682363 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-config-data\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.682455 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-scripts\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.682484 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-log-httpd\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.682528 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-run-httpd\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.682543 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.682558 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqrnk\" (UniqueName: \"kubernetes.io/projected/2383b50e-7f1b-458e-ab0e-c1bfc5698227-kube-api-access-vqrnk\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.682624 4898 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.686915 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-log-httpd\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.689870 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-run-httpd\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.695052 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.695133 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-config-data\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.696121 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-scripts\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.696658 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.729320 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqrnk\" (UniqueName: \"kubernetes.io/projected/2383b50e-7f1b-458e-ab0e-c1bfc5698227-kube-api-access-vqrnk\") pod \"ceilometer-0\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " pod="openstack/ceilometer-0" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.736908 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153bc822-6bc7-4b54-a1e2-99badd3d0211" path="/var/lib/kubelet/pods/153bc822-6bc7-4b54-a1e2-99badd3d0211/volumes" Jan 20 04:07:27 crc kubenswrapper[4898]: I0120 04:07:27.834156 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:28 crc kubenswrapper[4898]: I0120 04:07:28.292492 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:29 crc kubenswrapper[4898]: I0120 04:07:29.122738 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2383b50e-7f1b-458e-ab0e-c1bfc5698227","Type":"ContainerStarted","Data":"413f25524efa579f8098d07763a45fa767a4259120cc580470af5dbc98627e0e"} Jan 20 04:07:29 crc kubenswrapper[4898]: I0120 04:07:29.390621 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 20 04:07:30 crc kubenswrapper[4898]: I0120 04:07:30.132279 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2383b50e-7f1b-458e-ab0e-c1bfc5698227","Type":"ContainerStarted","Data":"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e"} Jan 20 04:07:31 crc kubenswrapper[4898]: I0120 04:07:31.144409 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2383b50e-7f1b-458e-ab0e-c1bfc5698227","Type":"ContainerStarted","Data":"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857"} Jan 20 04:07:34 crc kubenswrapper[4898]: I0120 04:07:34.718294 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:34 crc kubenswrapper[4898]: I0120 04:07:34.718895 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-659bc66b4c-5cnqm" Jan 20 04:07:35 crc kubenswrapper[4898]: I0120 04:07:35.952537 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:36 crc kubenswrapper[4898]: E0120 04:07:36.902197 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod153bc822_6bc7_4b54_a1e2_99badd3d0211.slice/crio-conmon-c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod153bc822_6bc7_4b54_a1e2_99badd3d0211.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d4ffd01_3225_47f8_a88b_00acb1506664.slice/crio-bfdd1d5a4387e807ff7d5396ca7b489ce7003d477e637471fd37fbc28b22f0df\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda170c9ba_92a5_4a91_b4f1_0a6be53941e5.slice/crio-a4c348c6b3c80fb7ef23519492b0f867c80846e0ac9918ea40ee737954eb0299\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded637740_b1d6_464e_9167_010b86294ae0.slice/crio-50201ad7e362c2e8552a2853471560817780bc72df94318e1deebe0b645044f9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod153bc822_6bc7_4b54_a1e2_99badd3d0211.slice/crio-a1ce4b03c4444ed247fe03db5f4e637aa1610a8203b2bcbfcaf92f3404fefbbc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00669966_0f38_45df_9949_d2ee09f6d294.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52ac8ca0_9523_4d9e_8a71_22f8452a65ba.slice/crio-164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod153bc822_6bc7_4b54_a1e2_99badd3d0211.slice/crio-c3fd95e6b0f2d3901c357a2173ffd1897897f3d30ee484ad4c85b91eaae3eb7c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00669966_0f38_45df_9949_d2ee09f6d294.slice/crio-2b5ea78d05ef6757f75a9298f55fa6afd3b5c576af6a6aab47bd959494c437c2\": RecentStats: unable to find data in memory cache]" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.139673 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.214350 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2383b50e-7f1b-458e-ab0e-c1bfc5698227","Type":"ContainerStarted","Data":"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f"} Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.216454 4898 generic.go:334] "Generic (PLEG): container finished" podID="52ac8ca0-9523-4d9e-8a71-22f8452a65ba" containerID="164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96" exitCode=137 Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.216500 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.216519 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"52ac8ca0-9523-4d9e-8a71-22f8452a65ba","Type":"ContainerDied","Data":"164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96"} Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.216542 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"52ac8ca0-9523-4d9e-8a71-22f8452a65ba","Type":"ContainerDied","Data":"93b5379a257ef02f669ba27235a07e67616cb39353f202366cd6d6e6abc71d32"} Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.216560 4898 scope.go:117] "RemoveContainer" containerID="164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.219535 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"37019a69-852b-47d8-8090-db3f78bbf2a5","Type":"ContainerStarted","Data":"2f2170bf5e6f54030b2d1142512af0cdb0d49a14ee43bc61aa90bc6e3c7a3840"} Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.238431 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.168273837 podStartE2EDuration="12.238407143s" podCreationTimestamp="2026-01-20 04:07:25 +0000 UTC" firstStartedPulling="2026-01-20 04:07:25.97957666 +0000 UTC m=+1092.579364519" lastFinishedPulling="2026-01-20 04:07:36.049709966 +0000 UTC m=+1102.649497825" observedRunningTime="2026-01-20 04:07:37.236187612 +0000 UTC m=+1103.835975471" watchObservedRunningTime="2026-01-20 04:07:37.238407143 +0000 UTC m=+1103.838195002" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.242924 4898 scope.go:117] "RemoveContainer" containerID="b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.267669 4898 
scope.go:117] "RemoveContainer" containerID="164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96" Jan 20 04:07:37 crc kubenswrapper[4898]: E0120 04:07:37.268134 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96\": container with ID starting with 164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96 not found: ID does not exist" containerID="164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.268191 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96"} err="failed to get container status \"164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96\": rpc error: code = NotFound desc = could not find container \"164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96\": container with ID starting with 164830e7d869ee51bdda815c7c5713408a1654d0bd0f9f66223241425f9aef96 not found: ID does not exist" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.268246 4898 scope.go:117] "RemoveContainer" containerID="b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3" Jan 20 04:07:37 crc kubenswrapper[4898]: E0120 04:07:37.268609 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3\": container with ID starting with b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3 not found: ID does not exist" containerID="b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.268640 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3"} err="failed to get container status \"b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3\": rpc error: code = NotFound desc = could not find container \"b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3\": container with ID starting with b3eb4de46e3167646c5537c90e7436d709bbe40a31d54c5e68fe7704dddf1ec3 not found: ID does not exist" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.270217 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-combined-ca-bundle\") pod \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.270282 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqt5s\" (UniqueName: \"kubernetes.io/projected/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-kube-api-access-fqt5s\") pod \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.270344 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data\") pod \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.271353 4898 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-logs\") pod \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.271424 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data-custom\") pod \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.271653 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-scripts\") pod \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.271702 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-etc-machine-id\") pod \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\" (UID: \"52ac8ca0-9523-4d9e-8a71-22f8452a65ba\") " Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.273627 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-logs" (OuterVolumeSpecName: "logs") pod "52ac8ca0-9523-4d9e-8a71-22f8452a65ba" (UID: "52ac8ca0-9523-4d9e-8a71-22f8452a65ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.276920 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "52ac8ca0-9523-4d9e-8a71-22f8452a65ba" (UID: "52ac8ca0-9523-4d9e-8a71-22f8452a65ba"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.278485 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "52ac8ca0-9523-4d9e-8a71-22f8452a65ba" (UID: "52ac8ca0-9523-4d9e-8a71-22f8452a65ba"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.281022 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-scripts" (OuterVolumeSpecName: "scripts") pod "52ac8ca0-9523-4d9e-8a71-22f8452a65ba" (UID: "52ac8ca0-9523-4d9e-8a71-22f8452a65ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.295488 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-kube-api-access-fqt5s" (OuterVolumeSpecName: "kube-api-access-fqt5s") pod "52ac8ca0-9523-4d9e-8a71-22f8452a65ba" (UID: "52ac8ca0-9523-4d9e-8a71-22f8452a65ba"). InnerVolumeSpecName "kube-api-access-fqt5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.308726 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52ac8ca0-9523-4d9e-8a71-22f8452a65ba" (UID: "52ac8ca0-9523-4d9e-8a71-22f8452a65ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.334032 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data" (OuterVolumeSpecName: "config-data") pod "52ac8ca0-9523-4d9e-8a71-22f8452a65ba" (UID: "52ac8ca0-9523-4d9e-8a71-22f8452a65ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.374537 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-logs\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.374570 4898 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.374580 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.374588 4898 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.374598 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.374611 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqt5s\" (UniqueName: \"kubernetes.io/projected/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-kube-api-access-fqt5s\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.374620 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ac8ca0-9523-4d9e-8a71-22f8452a65ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.550221 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.566141 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.593977 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 20 04:07:37 crc kubenswrapper[4898]: E0120 04:07:37.595084 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ac8ca0-9523-4d9e-8a71-22f8452a65ba" containerName="cinder-api" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.595106 4898 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="52ac8ca0-9523-4d9e-8a71-22f8452a65ba" containerName="cinder-api" Jan 20 04:07:37 crc kubenswrapper[4898]: E0120 04:07:37.595124 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ac8ca0-9523-4d9e-8a71-22f8452a65ba" containerName="cinder-api-log" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.595131 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ac8ca0-9523-4d9e-8a71-22f8452a65ba" containerName="cinder-api-log" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.595651 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ac8ca0-9523-4d9e-8a71-22f8452a65ba" containerName="cinder-api" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.595677 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ac8ca0-9523-4d9e-8a71-22f8452a65ba" containerName="cinder-api-log" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.596784 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.599395 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.599657 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.600948 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.621917 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.684286 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.684364 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.684387 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-config-data-custom\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.684408 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-scripts\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.684528 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " 
pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.684614 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stlnm\" (UniqueName: \"kubernetes.io/projected/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-kube-api-access-stlnm\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.684654 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.684700 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-config-data\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.684761 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-logs\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.732099 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ac8ca0-9523-4d9e-8a71-22f8452a65ba" path="/var/lib/kubelet/pods/52ac8ca0-9523-4d9e-8a71-22f8452a65ba/volumes" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.786686 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.786740 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-config-data-custom\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.786776 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-scripts\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.786814 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.786879 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stlnm\" (UniqueName: \"kubernetes.io/projected/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-kube-api-access-stlnm\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " 
pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.786913 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.786944 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-logs\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.786961 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-config-data\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.786993 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.788110 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.788749 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-logs\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.794031 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.794896 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.795272 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-scripts\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.795704 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-config-data\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 
04:07:37.795784 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-config-data-custom\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.796851 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.808187 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stlnm\" (UniqueName: \"kubernetes.io/projected/bf226116-7c6f-473d-9ffe-14cb5e7bbdc5-kube-api-access-stlnm\") pod \"cinder-api-0\" (UID: \"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5\") " pod="openstack/cinder-api-0" Jan 20 04:07:37 crc kubenswrapper[4898]: I0120 04:07:37.914475 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 04:07:38 crc kubenswrapper[4898]: I0120 04:07:38.271029 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2383b50e-7f1b-458e-ab0e-c1bfc5698227","Type":"ContainerStarted","Data":"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d"} Jan 20 04:07:38 crc kubenswrapper[4898]: I0120 04:07:38.271473 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="ceilometer-central-agent" containerID="cri-o://c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e" gracePeriod=30 Jan 20 04:07:38 crc kubenswrapper[4898]: I0120 04:07:38.271726 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 04:07:38 crc kubenswrapper[4898]: I0120 04:07:38.272013 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="proxy-httpd" containerID="cri-o://43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d" gracePeriod=30 Jan 20 04:07:38 crc kubenswrapper[4898]: I0120 04:07:38.272057 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="sg-core" containerID="cri-o://8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f" gracePeriod=30 Jan 20 04:07:38 crc kubenswrapper[4898]: I0120 04:07:38.272100 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="ceilometer-notification-agent" containerID="cri-o://194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857" gracePeriod=30 Jan 20 04:07:38 crc kubenswrapper[4898]: I0120 04:07:38.309547 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.749064996 podStartE2EDuration="11.309527663s" podCreationTimestamp="2026-01-20 04:07:27 +0000 UTC" firstStartedPulling="2026-01-20 04:07:28.313443223 +0000 UTC m=+1094.913231082" lastFinishedPulling="2026-01-20 04:07:37.87390589 +0000 UTC m=+1104.473693749" observedRunningTime="2026-01-20 04:07:38.301073555 +0000 UTC 
m=+1104.900861414" watchObservedRunningTime="2026-01-20 04:07:38.309527663 +0000 UTC m=+1104.909315522" Jan 20 04:07:38 crc kubenswrapper[4898]: I0120 04:07:38.398082 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 04:07:38 crc kubenswrapper[4898]: W0120 04:07:38.398390 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf226116_7c6f_473d_9ffe_14cb5e7bbdc5.slice/crio-c320ff2e9203e0631d201b24e70560217916d64222aa58a31b31ea07ef1fc7fa WatchSource:0}: Error finding container c320ff2e9203e0631d201b24e70560217916d64222aa58a31b31ea07ef1fc7fa: Status 404 returned error can't find the container with id c320ff2e9203e0631d201b24e70560217916d64222aa58a31b31ea07ef1fc7fa Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.218602 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.317215 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-sg-core-conf-yaml\") pod \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.317286 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-run-httpd\") pod \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.317420 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqrnk\" (UniqueName: \"kubernetes.io/projected/2383b50e-7f1b-458e-ab0e-c1bfc5698227-kube-api-access-vqrnk\") pod \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.317457 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-config-data\") pod \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.317485 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-scripts\") pod \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.317594 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-combined-ca-bundle\") pod \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.317636 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-log-httpd\") pod \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\" (UID: \"2383b50e-7f1b-458e-ab0e-c1bfc5698227\") " Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.317654 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2383b50e-7f1b-458e-ab0e-c1bfc5698227" (UID: "2383b50e-7f1b-458e-ab0e-c1bfc5698227"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.318042 4898 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.318429 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2383b50e-7f1b-458e-ab0e-c1bfc5698227" (UID: "2383b50e-7f1b-458e-ab0e-c1bfc5698227"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.323967 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-scripts" (OuterVolumeSpecName: "scripts") pod "2383b50e-7f1b-458e-ab0e-c1bfc5698227" (UID: "2383b50e-7f1b-458e-ab0e-c1bfc5698227"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.324133 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5","Type":"ContainerStarted","Data":"3c5a9a07d17fc36e7a529ab7db20754fc28e693c9b2cf1e662d0b086411fd171"} Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.324172 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5","Type":"ContainerStarted","Data":"c320ff2e9203e0631d201b24e70560217916d64222aa58a31b31ea07ef1fc7fa"} Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.325581 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2383b50e-7f1b-458e-ab0e-c1bfc5698227-kube-api-access-vqrnk" (OuterVolumeSpecName: "kube-api-access-vqrnk") pod "2383b50e-7f1b-458e-ab0e-c1bfc5698227" (UID: "2383b50e-7f1b-458e-ab0e-c1bfc5698227"). InnerVolumeSpecName "kube-api-access-vqrnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.332100 4898 generic.go:334] "Generic (PLEG): container finished" podID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerID="43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d" exitCode=0 Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.332128 4898 generic.go:334] "Generic (PLEG): container finished" podID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerID="8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f" exitCode=2 Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.332137 4898 generic.go:334] "Generic (PLEG): container finished" podID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerID="194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857" exitCode=0 Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.332145 4898 generic.go:334] "Generic (PLEG): container finished" podID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerID="c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e" exitCode=0 Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.332167 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2383b50e-7f1b-458e-ab0e-c1bfc5698227","Type":"ContainerDied","Data":"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d"} Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.332193 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2383b50e-7f1b-458e-ab0e-c1bfc5698227","Type":"ContainerDied","Data":"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f"} Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.332204 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2383b50e-7f1b-458e-ab0e-c1bfc5698227","Type":"ContainerDied","Data":"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857"} Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.332214 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2383b50e-7f1b-458e-ab0e-c1bfc5698227","Type":"ContainerDied","Data":"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e"} Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.332223 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2383b50e-7f1b-458e-ab0e-c1bfc5698227","Type":"ContainerDied","Data":"413f25524efa579f8098d07763a45fa767a4259120cc580470af5dbc98627e0e"} Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.332238 4898 scope.go:117] "RemoveContainer" containerID="43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.332256 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.372505 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2383b50e-7f1b-458e-ab0e-c1bfc5698227" (UID: "2383b50e-7f1b-458e-ab0e-c1bfc5698227"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.395142 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2383b50e-7f1b-458e-ab0e-c1bfc5698227" (UID: "2383b50e-7f1b-458e-ab0e-c1bfc5698227"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.411812 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-config-data" (OuterVolumeSpecName: "config-data") pod "2383b50e-7f1b-458e-ab0e-c1bfc5698227" (UID: "2383b50e-7f1b-458e-ab0e-c1bfc5698227"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.419651 4898 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.419682 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqrnk\" (UniqueName: \"kubernetes.io/projected/2383b50e-7f1b-458e-ab0e-c1bfc5698227-kube-api-access-vqrnk\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.419695 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.419704 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.419712 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2383b50e-7f1b-458e-ab0e-c1bfc5698227-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.419720 4898 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2383b50e-7f1b-458e-ab0e-c1bfc5698227-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.420721 4898 scope.go:117] "RemoveContainer" containerID="8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.442389 4898 scope.go:117] "RemoveContainer" containerID="194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.469685 4898 scope.go:117] "RemoveContainer" containerID="c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.520388 4898 scope.go:117] "RemoveContainer" containerID="43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d" Jan 20 04:07:39 crc kubenswrapper[4898]: E0120 04:07:39.520861 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d\": container with ID starting with 
43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d not found: ID does not exist" containerID="43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.520894 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d"} err="failed to get container status \"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d\": rpc error: code = NotFound desc = could not find container \"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d\": container with ID starting with 43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.520915 4898 scope.go:117] "RemoveContainer" containerID="8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f" Jan 20 04:07:39 crc kubenswrapper[4898]: E0120 04:07:39.521218 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f\": container with ID starting with 8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f not found: ID does not exist" containerID="8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.521259 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f"} err="failed to get container status \"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f\": rpc error: code = NotFound desc = could not find container \"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f\": container with ID starting with 8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.521288 4898 scope.go:117] "RemoveContainer" containerID="194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857" Jan 20 04:07:39 crc kubenswrapper[4898]: E0120 04:07:39.521768 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857\": container with ID starting with 194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857 not found: ID does not exist" containerID="194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.521808 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857"} err="failed to get container status \"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857\": rpc error: code = NotFound desc = could not find container \"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857\": container with ID starting with 194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857 not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.521834 4898 scope.go:117] "RemoveContainer" containerID="c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e" Jan 20 04:07:39 crc kubenswrapper[4898]: E0120 04:07:39.522334 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e\": container with ID starting with c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e not found: ID does not exist" containerID="c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.522377 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e"} err="failed to get container status \"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e\": rpc error: code = NotFound desc = could not find container \"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e\": container with ID starting with c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.522404 4898 scope.go:117] "RemoveContainer" containerID="43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.522727 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d"} err="failed to get container status \"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d\": rpc error: code = NotFound desc = could not find container \"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d\": container with ID starting with 43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.522746 4898 scope.go:117] "RemoveContainer" containerID="8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.522961 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f"} err="failed to get container status \"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f\": rpc error: code = NotFound desc = could not find container \"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f\": container with ID starting with 8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.522980 4898 scope.go:117] "RemoveContainer" containerID="194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.523287 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857"} err="failed to get container status \"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857\": rpc error: code = NotFound desc = could not find container \"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857\": container with ID starting with 194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857 not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.523309 4898 scope.go:117] "RemoveContainer" containerID="c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.523668 4898 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e"} err="failed to get container status \"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e\": rpc error: code = NotFound desc = could not find container \"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e\": container with ID starting with c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.523686 4898 scope.go:117] "RemoveContainer" containerID="43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.523945 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d"} err="failed to get container status \"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d\": rpc error: code = NotFound desc = could not find container \"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d\": container with ID starting with 43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.523974 4898 scope.go:117] "RemoveContainer" containerID="8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.524229 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f"} err="failed to get container status \"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f\": rpc error: code = NotFound desc = could not find container \"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f\": container with ID starting with 8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.524248 4898 scope.go:117] "RemoveContainer" containerID="194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.524504 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857"} err="failed to get container status \"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857\": rpc error: code = NotFound desc = could not find container \"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857\": container with ID starting with 194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857 not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.524524 4898 scope.go:117] "RemoveContainer" containerID="c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.524760 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e"} err="failed to get container status \"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e\": rpc error: code = NotFound desc = could not find container \"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e\": container with ID starting with c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e not found: ID does not exist" Jan 
20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.524791 4898 scope.go:117] "RemoveContainer" containerID="43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.529324 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d"} err="failed to get container status \"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d\": rpc error: code = NotFound desc = could not find container \"43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d\": container with ID starting with 43a52043d17aa13c274f66c9fda3a53769945a12ee62dbc6eb4d185594711a7d not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.529349 4898 scope.go:117] "RemoveContainer" containerID="8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.529683 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f"} err="failed to get container status \"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f\": rpc error: code = NotFound desc = could not find container \"8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f\": container with ID starting with 8e9fed692065c22eb917b0d8ec5040a917e6ec2c9e32be088a36038b54c43c5f not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.529702 4898 scope.go:117] "RemoveContainer" containerID="194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.529940 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857"} err="failed to get container status \"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857\": rpc error: code = NotFound desc = could not find container \"194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857\": container with ID starting with 194bc38eb5e2891b657a10fb8e85cce63bfb0697b229ae72f2dced0e50f6d857 not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.529965 4898 scope.go:117] "RemoveContainer" containerID="c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.530165 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e"} err="failed to get container status \"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e\": rpc error: code = NotFound desc = could not find container \"c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e\": container with ID starting with c9f7e1ba40005e86512d64e0ee48ee811359894faaeef7afee5a3a09d466a82e not found: ID does not exist" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.674794 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.691152 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.703389 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:39 crc 
kubenswrapper[4898]: E0120 04:07:39.703826 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="ceilometer-central-agent" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.703844 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="ceilometer-central-agent" Jan 20 04:07:39 crc kubenswrapper[4898]: E0120 04:07:39.703855 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="ceilometer-notification-agent" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.703861 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="ceilometer-notification-agent" Jan 20 04:07:39 crc kubenswrapper[4898]: E0120 04:07:39.703881 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="sg-core" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.703888 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="sg-core" Jan 20 04:07:39 crc kubenswrapper[4898]: E0120 04:07:39.703899 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="proxy-httpd" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.703905 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="proxy-httpd" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.704086 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="ceilometer-notification-agent" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.704102 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="sg-core" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.704108 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="ceilometer-central-agent" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.704119 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" containerName="proxy-httpd" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.705649 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.713636 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.714371 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.714567 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.737318 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2383b50e-7f1b-458e-ab0e-c1bfc5698227" path="/var/lib/kubelet/pods/2383b50e-7f1b-458e-ab0e-c1bfc5698227/volumes" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.825664 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.825718 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-config-data\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.825775 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-scripts\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.825844 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-run-httpd\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.826541 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-log-httpd\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.826986 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlts8\" (UniqueName: \"kubernetes.io/projected/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-kube-api-access-hlts8\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.827022 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.928827 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-log-httpd\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.928931 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlts8\" (UniqueName: \"kubernetes.io/projected/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-kube-api-access-hlts8\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.928956 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.929024 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.929054 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-config-data\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.929084 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-scripts\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.929121 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-run-httpd\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.929655 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-log-httpd\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.929975 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-run-httpd\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.935594 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.937250 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-config-data\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.949501 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-scripts\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.949587 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:39 crc kubenswrapper[4898]: I0120 04:07:39.951640 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlts8\" (UniqueName: \"kubernetes.io/projected/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-kube-api-access-hlts8\") pod \"ceilometer-0\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " pod="openstack/ceilometer-0" Jan 20 04:07:40 crc kubenswrapper[4898]: I0120 04:07:40.033303 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:40 crc kubenswrapper[4898]: I0120 04:07:40.088086 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:40 crc kubenswrapper[4898]: I0120 04:07:40.341494 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bf226116-7c6f-473d-9ffe-14cb5e7bbdc5","Type":"ContainerStarted","Data":"7ac26e16bdc9348d048c150941ca5f1299d66aa067fe6d4fbcc73978870e1963"} Jan 20 04:07:40 crc kubenswrapper[4898]: I0120 04:07:40.342099 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 20 04:07:40 crc kubenswrapper[4898]: I0120 04:07:40.370870 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.37084916 podStartE2EDuration="3.37084916s" podCreationTimestamp="2026-01-20 04:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:07:40.359248671 +0000 UTC m=+1106.959036530" watchObservedRunningTime="2026-01-20 04:07:40.37084916 +0000 UTC m=+1106.970637019" Jan 20 04:07:40 crc kubenswrapper[4898]: I0120 04:07:40.492918 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:40 crc kubenswrapper[4898]: W0120 04:07:40.505402 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a638d06_7e3e_44f1_8c77_1026b4df8ce4.slice/crio-3d1fa7716f1c5275f36f3818f93b44c82a0e9847a9b1066daa273b4b2051655a WatchSource:0}: Error finding container 3d1fa7716f1c5275f36f3818f93b44c82a0e9847a9b1066daa273b4b2051655a: Status 404 returned error can't find the container with id 3d1fa7716f1c5275f36f3818f93b44c82a0e9847a9b1066daa273b4b2051655a Jan 20 04:07:41 crc kubenswrapper[4898]: I0120 04:07:41.367368 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a638d06-7e3e-44f1-8c77-1026b4df8ce4","Type":"ContainerStarted","Data":"04eb23fee2b23df9195f4add3f90d69ca81328d2ca35cb43ded2c79f1edc6512"} Jan 20 04:07:41 
crc kubenswrapper[4898]: I0120 04:07:41.367919 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a638d06-7e3e-44f1-8c77-1026b4df8ce4","Type":"ContainerStarted","Data":"3d1fa7716f1c5275f36f3818f93b44c82a0e9847a9b1066daa273b4b2051655a"} Jan 20 04:07:42 crc kubenswrapper[4898]: I0120 04:07:42.379190 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a638d06-7e3e-44f1-8c77-1026b4df8ce4","Type":"ContainerStarted","Data":"ff4a6c6972e11be7f9fd6c6250ede102ae86bafd4cc9a5277f8a80be6448e289"} Jan 20 04:07:43 crc kubenswrapper[4898]: I0120 04:07:43.391652 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a638d06-7e3e-44f1-8c77-1026b4df8ce4","Type":"ContainerStarted","Data":"ad8210aac40544518ba99a6e8079d564a5ac7c417868ca918914efc7dcb4405b"} Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.194754 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-cnqlc"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.202734 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cnqlc" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.213126 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cnqlc"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.272479 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-tkcqt"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.273859 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tkcqt" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.284108 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tkcqt"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.326684 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc2t5\" (UniqueName: \"kubernetes.io/projected/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-kube-api-access-tc2t5\") pod \"nova-api-db-create-cnqlc\" (UID: \"14f9b463-34e5-4ed0-b31e-0662ea4c09a8\") " pod="openstack/nova-api-db-create-cnqlc" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.326766 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-operator-scripts\") pod \"nova-api-db-create-cnqlc\" (UID: \"14f9b463-34e5-4ed0-b31e-0662ea4c09a8\") " pod="openstack/nova-api-db-create-cnqlc" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.372442 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2815-account-create-update-qr72w"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.373655 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2815-account-create-update-qr72w" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.375311 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.387220 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7jz5l"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.388370 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7jz5l" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.396002 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7jz5l"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.407903 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2815-account-create-update-qr72w"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.430501 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbxqg\" (UniqueName: \"kubernetes.io/projected/08cf8572-b400-41c1-ab44-089877cca867-kube-api-access-tbxqg\") pod \"nova-cell0-db-create-tkcqt\" (UID: \"08cf8572-b400-41c1-ab44-089877cca867\") " pod="openstack/nova-cell0-db-create-tkcqt" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.430856 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cf8572-b400-41c1-ab44-089877cca867-operator-scripts\") pod \"nova-cell0-db-create-tkcqt\" (UID: \"08cf8572-b400-41c1-ab44-089877cca867\") " pod="openstack/nova-cell0-db-create-tkcqt" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.430937 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc2t5\" (UniqueName: \"kubernetes.io/projected/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-kube-api-access-tc2t5\") pod \"nova-api-db-create-cnqlc\" (UID: \"14f9b463-34e5-4ed0-b31e-0662ea4c09a8\") " pod="openstack/nova-api-db-create-cnqlc" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.431054 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-operator-scripts\") pod \"nova-api-db-create-cnqlc\" (UID: \"14f9b463-34e5-4ed0-b31e-0662ea4c09a8\") " pod="openstack/nova-api-db-create-cnqlc" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.431859 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-operator-scripts\") pod \"nova-api-db-create-cnqlc\" (UID: \"14f9b463-34e5-4ed0-b31e-0662ea4c09a8\") " pod="openstack/nova-api-db-create-cnqlc" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.432184 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a638d06-7e3e-44f1-8c77-1026b4df8ce4","Type":"ContainerStarted","Data":"f88c2b696aece4614c3a3006339fbfed7c9d6b5a68c2fca4e5726c96bf38a005"} Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.432336 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="ceilometer-central-agent" containerID="cri-o://04eb23fee2b23df9195f4add3f90d69ca81328d2ca35cb43ded2c79f1edc6512" gracePeriod=30 Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.432393 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.432451 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="proxy-httpd" containerID="cri-o://f88c2b696aece4614c3a3006339fbfed7c9d6b5a68c2fca4e5726c96bf38a005" gracePeriod=30 Jan 
20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.432510 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="sg-core" containerID="cri-o://ad8210aac40544518ba99a6e8079d564a5ac7c417868ca918914efc7dcb4405b" gracePeriod=30 Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.432545 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="ceilometer-notification-agent" containerID="cri-o://ff4a6c6972e11be7f9fd6c6250ede102ae86bafd4cc9a5277f8a80be6448e289" gracePeriod=30 Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.458532 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc2t5\" (UniqueName: \"kubernetes.io/projected/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-kube-api-access-tc2t5\") pod \"nova-api-db-create-cnqlc\" (UID: \"14f9b463-34e5-4ed0-b31e-0662ea4c09a8\") " pod="openstack/nova-api-db-create-cnqlc" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.477331 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-881c-account-create-update-tcrgw"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.478474 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-881c-account-create-update-tcrgw" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.481300 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.482331 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.152607932 podStartE2EDuration="5.482319487s" podCreationTimestamp="2026-01-20 04:07:39 +0000 UTC" firstStartedPulling="2026-01-20 04:07:40.507181346 +0000 UTC m=+1107.106969205" lastFinishedPulling="2026-01-20 04:07:43.836892861 +0000 UTC m=+1110.436680760" observedRunningTime="2026-01-20 04:07:44.468507608 +0000 UTC m=+1111.068295467" watchObservedRunningTime="2026-01-20 04:07:44.482319487 +0000 UTC m=+1111.082107346" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.497746 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-881c-account-create-update-tcrgw"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.532838 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-cnqlc" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.533403 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbxqg\" (UniqueName: \"kubernetes.io/projected/08cf8572-b400-41c1-ab44-089877cca867-kube-api-access-tbxqg\") pod \"nova-cell0-db-create-tkcqt\" (UID: \"08cf8572-b400-41c1-ab44-089877cca867\") " pod="openstack/nova-cell0-db-create-tkcqt" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.533458 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cf8572-b400-41c1-ab44-089877cca867-operator-scripts\") pod \"nova-cell0-db-create-tkcqt\" (UID: \"08cf8572-b400-41c1-ab44-089877cca867\") " pod="openstack/nova-cell0-db-create-tkcqt" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.533565 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78mxm\" (UniqueName: \"kubernetes.io/projected/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-kube-api-access-78mxm\") pod \"nova-cell1-db-create-7jz5l\" (UID: \"94ff3424-465b-4cd1-a1b2-f30dfcb68e27\") " pod="openstack/nova-cell1-db-create-7jz5l" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.533623 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-operator-scripts\") pod \"nova-cell1-db-create-7jz5l\" (UID: \"94ff3424-465b-4cd1-a1b2-f30dfcb68e27\") " pod="openstack/nova-cell1-db-create-7jz5l" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.533714 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91d8e86b-0ab5-4724-8213-bd7258c4a124-operator-scripts\") pod \"nova-api-2815-account-create-update-qr72w\" (UID: \"91d8e86b-0ab5-4724-8213-bd7258c4a124\") " pod="openstack/nova-api-2815-account-create-update-qr72w" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.533751 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdqx\" (UniqueName: \"kubernetes.io/projected/91d8e86b-0ab5-4724-8213-bd7258c4a124-kube-api-access-prdqx\") pod \"nova-api-2815-account-create-update-qr72w\" (UID: \"91d8e86b-0ab5-4724-8213-bd7258c4a124\") " pod="openstack/nova-api-2815-account-create-update-qr72w" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.534388 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cf8572-b400-41c1-ab44-089877cca867-operator-scripts\") pod \"nova-cell0-db-create-tkcqt\" (UID: \"08cf8572-b400-41c1-ab44-089877cca867\") " pod="openstack/nova-cell0-db-create-tkcqt" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.567129 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbxqg\" (UniqueName: \"kubernetes.io/projected/08cf8572-b400-41c1-ab44-089877cca867-kube-api-access-tbxqg\") pod \"nova-cell0-db-create-tkcqt\" (UID: \"08cf8572-b400-41c1-ab44-089877cca867\") " pod="openstack/nova-cell0-db-create-tkcqt" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.597993 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tkcqt" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.636111 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78mxm\" (UniqueName: \"kubernetes.io/projected/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-kube-api-access-78mxm\") pod \"nova-cell1-db-create-7jz5l\" (UID: \"94ff3424-465b-4cd1-a1b2-f30dfcb68e27\") " pod="openstack/nova-cell1-db-create-7jz5l" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.636546 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-operator-scripts\") pod \"nova-cell0-881c-account-create-update-tcrgw\" (UID: \"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8\") " pod="openstack/nova-cell0-881c-account-create-update-tcrgw" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.636581 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-operator-scripts\") pod \"nova-cell1-db-create-7jz5l\" (UID: \"94ff3424-465b-4cd1-a1b2-f30dfcb68e27\") " pod="openstack/nova-cell1-db-create-7jz5l" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.637821 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91d8e86b-0ab5-4724-8213-bd7258c4a124-operator-scripts\") pod \"nova-api-2815-account-create-update-qr72w\" (UID: \"91d8e86b-0ab5-4724-8213-bd7258c4a124\") " pod="openstack/nova-api-2815-account-create-update-qr72w" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.637903 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prdqx\" (UniqueName: \"kubernetes.io/projected/91d8e86b-0ab5-4724-8213-bd7258c4a124-kube-api-access-prdqx\") pod \"nova-api-2815-account-create-update-qr72w\" (UID: \"91d8e86b-0ab5-4724-8213-bd7258c4a124\") " pod="openstack/nova-api-2815-account-create-update-qr72w" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.638046 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zj2b\" (UniqueName: \"kubernetes.io/projected/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-kube-api-access-8zj2b\") pod \"nova-cell0-881c-account-create-update-tcrgw\" (UID: \"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8\") " pod="openstack/nova-cell0-881c-account-create-update-tcrgw" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.640605 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91d8e86b-0ab5-4724-8213-bd7258c4a124-operator-scripts\") pod \"nova-api-2815-account-create-update-qr72w\" (UID: \"91d8e86b-0ab5-4724-8213-bd7258c4a124\") " pod="openstack/nova-api-2815-account-create-update-qr72w" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.640906 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-operator-scripts\") pod \"nova-cell1-db-create-7jz5l\" (UID: \"94ff3424-465b-4cd1-a1b2-f30dfcb68e27\") " pod="openstack/nova-cell1-db-create-7jz5l" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.662372 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78mxm\" 
(UniqueName: \"kubernetes.io/projected/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-kube-api-access-78mxm\") pod \"nova-cell1-db-create-7jz5l\" (UID: \"94ff3424-465b-4cd1-a1b2-f30dfcb68e27\") " pod="openstack/nova-cell1-db-create-7jz5l" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.667330 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prdqx\" (UniqueName: \"kubernetes.io/projected/91d8e86b-0ab5-4724-8213-bd7258c4a124-kube-api-access-prdqx\") pod \"nova-api-2815-account-create-update-qr72w\" (UID: \"91d8e86b-0ab5-4724-8213-bd7258c4a124\") " pod="openstack/nova-api-2815-account-create-update-qr72w" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.678858 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7e33-account-create-update-bkwm9"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.679938 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.683842 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.687722 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7e33-account-create-update-bkwm9"] Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.688076 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2815-account-create-update-qr72w" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.708097 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7jz5l" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.740325 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-operator-scripts\") pod \"nova-cell0-881c-account-create-update-tcrgw\" (UID: \"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8\") " pod="openstack/nova-cell0-881c-account-create-update-tcrgw" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.741133 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zj2b\" (UniqueName: \"kubernetes.io/projected/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-kube-api-access-8zj2b\") pod \"nova-cell0-881c-account-create-update-tcrgw\" (UID: \"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8\") " pod="openstack/nova-cell0-881c-account-create-update-tcrgw" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.741987 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-operator-scripts\") pod \"nova-cell0-881c-account-create-update-tcrgw\" (UID: \"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8\") " pod="openstack/nova-cell0-881c-account-create-update-tcrgw" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.764958 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zj2b\" (UniqueName: \"kubernetes.io/projected/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-kube-api-access-8zj2b\") pod \"nova-cell0-881c-account-create-update-tcrgw\" (UID: \"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8\") " pod="openstack/nova-cell0-881c-account-create-update-tcrgw" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.820801 4898 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-881c-account-create-update-tcrgw" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.843189 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee661f59-12b9-4598-afcc-c20dac6c2694-operator-scripts\") pod \"nova-cell1-7e33-account-create-update-bkwm9\" (UID: \"ee661f59-12b9-4598-afcc-c20dac6c2694\") " pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.843245 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv55x\" (UniqueName: \"kubernetes.io/projected/ee661f59-12b9-4598-afcc-c20dac6c2694-kube-api-access-sv55x\") pod \"nova-cell1-7e33-account-create-update-bkwm9\" (UID: \"ee661f59-12b9-4598-afcc-c20dac6c2694\") " pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.948569 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee661f59-12b9-4598-afcc-c20dac6c2694-operator-scripts\") pod \"nova-cell1-7e33-account-create-update-bkwm9\" (UID: \"ee661f59-12b9-4598-afcc-c20dac6c2694\") " pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.948615 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv55x\" (UniqueName: \"kubernetes.io/projected/ee661f59-12b9-4598-afcc-c20dac6c2694-kube-api-access-sv55x\") pod \"nova-cell1-7e33-account-create-update-bkwm9\" (UID: \"ee661f59-12b9-4598-afcc-c20dac6c2694\") " pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.949617 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee661f59-12b9-4598-afcc-c20dac6c2694-operator-scripts\") pod \"nova-cell1-7e33-account-create-update-bkwm9\" (UID: \"ee661f59-12b9-4598-afcc-c20dac6c2694\") " pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" Jan 20 04:07:44 crc kubenswrapper[4898]: I0120 04:07:44.968377 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv55x\" (UniqueName: \"kubernetes.io/projected/ee661f59-12b9-4598-afcc-c20dac6c2694-kube-api-access-sv55x\") pod \"nova-cell1-7e33-account-create-update-bkwm9\" (UID: \"ee661f59-12b9-4598-afcc-c20dac6c2694\") " pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.016818 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cnqlc"] Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.017029 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.078796 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2815-account-create-update-qr72w"] Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.151598 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tkcqt"] Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.345469 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7jz5l"] Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.435453 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-881c-account-create-update-tcrgw"] Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.460464 4898 generic.go:334] "Generic (PLEG): container finished" podID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerID="f88c2b696aece4614c3a3006339fbfed7c9d6b5a68c2fca4e5726c96bf38a005" exitCode=0 Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.460538 4898 generic.go:334] "Generic (PLEG): container finished" podID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerID="ad8210aac40544518ba99a6e8079d564a5ac7c417868ca918914efc7dcb4405b" exitCode=2 Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.460550 4898 generic.go:334] "Generic (PLEG): container finished" podID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerID="ff4a6c6972e11be7f9fd6c6250ede102ae86bafd4cc9a5277f8a80be6448e289" exitCode=0 Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.460782 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a638d06-7e3e-44f1-8c77-1026b4df8ce4","Type":"ContainerDied","Data":"f88c2b696aece4614c3a3006339fbfed7c9d6b5a68c2fca4e5726c96bf38a005"} Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.460835 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a638d06-7e3e-44f1-8c77-1026b4df8ce4","Type":"ContainerDied","Data":"ad8210aac40544518ba99a6e8079d564a5ac7c417868ca918914efc7dcb4405b"} Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.460848 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a638d06-7e3e-44f1-8c77-1026b4df8ce4","Type":"ContainerDied","Data":"ff4a6c6972e11be7f9fd6c6250ede102ae86bafd4cc9a5277f8a80be6448e289"} Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.468244 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cnqlc" event={"ID":"14f9b463-34e5-4ed0-b31e-0662ea4c09a8","Type":"ContainerStarted","Data":"2fb6613803f2da2019b370243a7c8cf9cc4cd8e21700eec2fc51ff3e9b294af3"} Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.468310 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cnqlc" event={"ID":"14f9b463-34e5-4ed0-b31e-0662ea4c09a8","Type":"ContainerStarted","Data":"d1d0f748e17c8ac4d0ff1ae8a194e60fd9a7ad033251e804e1861bacabd635b4"} Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.470210 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2815-account-create-update-qr72w" event={"ID":"91d8e86b-0ab5-4724-8213-bd7258c4a124","Type":"ContainerStarted","Data":"1f5c599b1399a14317a7907f26476ed9e6086b8439892ac16c02341cc18cc39b"} Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.470508 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2815-account-create-update-qr72w" 
event={"ID":"91d8e86b-0ab5-4724-8213-bd7258c4a124","Type":"ContainerStarted","Data":"171b80d6edeae55bc0d9fa3c152bee4e48a0444934303198fbfe95c73d5c90ca"} Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.473920 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tkcqt" event={"ID":"08cf8572-b400-41c1-ab44-089877cca867","Type":"ContainerStarted","Data":"8cb5ccf65ed9dc59227727ee8464ee422e8e6e86922ae12c69478da83700cf02"} Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.473951 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tkcqt" event={"ID":"08cf8572-b400-41c1-ab44-089877cca867","Type":"ContainerStarted","Data":"27464d5f247b5b610855141fe2e6a25f1d83bbb8f1336fdce7b3c2b50c4da98f"} Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.479181 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7jz5l" event={"ID":"94ff3424-465b-4cd1-a1b2-f30dfcb68e27","Type":"ContainerStarted","Data":"1d9971afec0ddea71b4050d2acaccce9ba17afdae4cbf8680b6801325e114cd2"} Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.509160 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-cnqlc" podStartSLOduration=1.509127587 podStartE2EDuration="1.509127587s" podCreationTimestamp="2026-01-20 04:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:07:45.499025207 +0000 UTC m=+1112.098813066" watchObservedRunningTime="2026-01-20 04:07:45.509127587 +0000 UTC m=+1112.108915446" Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.530633 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2815-account-create-update-qr72w" podStartSLOduration=1.53060582 podStartE2EDuration="1.53060582s" podCreationTimestamp="2026-01-20 04:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:07:45.51675233 +0000 UTC m=+1112.116540189" watchObservedRunningTime="2026-01-20 04:07:45.53060582 +0000 UTC m=+1112.130393669" Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.548537 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-tkcqt" podStartSLOduration=1.548515337 podStartE2EDuration="1.548515337s" podCreationTimestamp="2026-01-20 04:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:07:45.537754956 +0000 UTC m=+1112.137542815" watchObservedRunningTime="2026-01-20 04:07:45.548515337 +0000 UTC m=+1112.148303196" Jan 20 04:07:45 crc kubenswrapper[4898]: I0120 04:07:45.629977 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7e33-account-create-update-bkwm9"] Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.488228 4898 generic.go:334] "Generic (PLEG): container finished" podID="fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8" containerID="c4e6056e3fc035c334d2a54ddb4f1b2c1f3dcdd4e1dbf191af733f442ba6e716" exitCode=0 Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.488271 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-881c-account-create-update-tcrgw" 
event={"ID":"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8","Type":"ContainerDied","Data":"c4e6056e3fc035c334d2a54ddb4f1b2c1f3dcdd4e1dbf191af733f442ba6e716"} Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.488642 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-881c-account-create-update-tcrgw" event={"ID":"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8","Type":"ContainerStarted","Data":"cb3258b330e0a8c8fc93804766bcad21128a7a3818faf9f298e07e0a671eb6ed"} Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.491162 4898 generic.go:334] "Generic (PLEG): container finished" podID="14f9b463-34e5-4ed0-b31e-0662ea4c09a8" containerID="2fb6613803f2da2019b370243a7c8cf9cc4cd8e21700eec2fc51ff3e9b294af3" exitCode=0 Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.491307 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cnqlc" event={"ID":"14f9b463-34e5-4ed0-b31e-0662ea4c09a8","Type":"ContainerDied","Data":"2fb6613803f2da2019b370243a7c8cf9cc4cd8e21700eec2fc51ff3e9b294af3"} Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.492973 4898 generic.go:334] "Generic (PLEG): container finished" podID="91d8e86b-0ab5-4724-8213-bd7258c4a124" containerID="1f5c599b1399a14317a7907f26476ed9e6086b8439892ac16c02341cc18cc39b" exitCode=0 Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.493025 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2815-account-create-update-qr72w" event={"ID":"91d8e86b-0ab5-4724-8213-bd7258c4a124","Type":"ContainerDied","Data":"1f5c599b1399a14317a7907f26476ed9e6086b8439892ac16c02341cc18cc39b"} Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.494364 4898 generic.go:334] "Generic (PLEG): container finished" podID="ee661f59-12b9-4598-afcc-c20dac6c2694" containerID="50177f10a0f1802a193266c9cadd5b928de9102eca5edfef8959a0d37e83c456" exitCode=0 Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.494492 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" event={"ID":"ee661f59-12b9-4598-afcc-c20dac6c2694","Type":"ContainerDied","Data":"50177f10a0f1802a193266c9cadd5b928de9102eca5edfef8959a0d37e83c456"} Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.494673 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" event={"ID":"ee661f59-12b9-4598-afcc-c20dac6c2694","Type":"ContainerStarted","Data":"15487e741da489b03d341d6cf92d442ff056177888b59b97b03f24a56217995b"} Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.495867 4898 generic.go:334] "Generic (PLEG): container finished" podID="08cf8572-b400-41c1-ab44-089877cca867" containerID="8cb5ccf65ed9dc59227727ee8464ee422e8e6e86922ae12c69478da83700cf02" exitCode=0 Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.495984 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tkcqt" event={"ID":"08cf8572-b400-41c1-ab44-089877cca867","Type":"ContainerDied","Data":"8cb5ccf65ed9dc59227727ee8464ee422e8e6e86922ae12c69478da83700cf02"} Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.520249 4898 generic.go:334] "Generic (PLEG): container finished" podID="94ff3424-465b-4cd1-a1b2-f30dfcb68e27" containerID="ba3ff2156ec12a6e474e2c779fa2876ed2dc2592699760e0353699fa84fc25d6" exitCode=0 Jan 20 04:07:46 crc kubenswrapper[4898]: I0120 04:07:46.520545 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7jz5l" 
event={"ID":"94ff3424-465b-4cd1-a1b2-f30dfcb68e27","Type":"ContainerDied","Data":"ba3ff2156ec12a6e474e2c779fa2876ed2dc2592699760e0353699fa84fc25d6"} Jan 20 04:07:47 crc kubenswrapper[4898]: E0120 04:07:47.155317 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded637740_b1d6_464e_9167_010b86294ae0.slice/crio-50201ad7e362c2e8552a2853471560817780bc72df94318e1deebe0b645044f9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d4ffd01_3225_47f8_a88b_00acb1506664.slice/crio-bfdd1d5a4387e807ff7d5396ca7b489ce7003d477e637471fd37fbc28b22f0df\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda170c9ba_92a5_4a91_b4f1_0a6be53941e5.slice/crio-a4c348c6b3c80fb7ef23519492b0f867c80846e0ac9918ea40ee737954eb0299\": RecentStats: unable to find data in memory cache]" Jan 20 04:07:47 crc kubenswrapper[4898]: I0120 04:07:47.984065 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tkcqt" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.122767 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbxqg\" (UniqueName: \"kubernetes.io/projected/08cf8572-b400-41c1-ab44-089877cca867-kube-api-access-tbxqg\") pod \"08cf8572-b400-41c1-ab44-089877cca867\" (UID: \"08cf8572-b400-41c1-ab44-089877cca867\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.122853 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cf8572-b400-41c1-ab44-089877cca867-operator-scripts\") pod \"08cf8572-b400-41c1-ab44-089877cca867\" (UID: \"08cf8572-b400-41c1-ab44-089877cca867\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.124659 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08cf8572-b400-41c1-ab44-089877cca867-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08cf8572-b400-41c1-ab44-089877cca867" (UID: "08cf8572-b400-41c1-ab44-089877cca867"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.132285 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cf8572-b400-41c1-ab44-089877cca867-kube-api-access-tbxqg" (OuterVolumeSpecName: "kube-api-access-tbxqg") pod "08cf8572-b400-41c1-ab44-089877cca867" (UID: "08cf8572-b400-41c1-ab44-089877cca867"). InnerVolumeSpecName "kube-api-access-tbxqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.224614 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbxqg\" (UniqueName: \"kubernetes.io/projected/08cf8572-b400-41c1-ab44-089877cca867-kube-api-access-tbxqg\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.224640 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cf8572-b400-41c1-ab44-089877cca867-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.227699 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7jz5l" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.326027 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78mxm\" (UniqueName: \"kubernetes.io/projected/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-kube-api-access-78mxm\") pod \"94ff3424-465b-4cd1-a1b2-f30dfcb68e27\" (UID: \"94ff3424-465b-4cd1-a1b2-f30dfcb68e27\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.326420 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-operator-scripts\") pod \"94ff3424-465b-4cd1-a1b2-f30dfcb68e27\" (UID: \"94ff3424-465b-4cd1-a1b2-f30dfcb68e27\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.327771 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94ff3424-465b-4cd1-a1b2-f30dfcb68e27" (UID: "94ff3424-465b-4cd1-a1b2-f30dfcb68e27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.343282 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-kube-api-access-78mxm" (OuterVolumeSpecName: "kube-api-access-78mxm") pod "94ff3424-465b-4cd1-a1b2-f30dfcb68e27" (UID: "94ff3424-465b-4cd1-a1b2-f30dfcb68e27"). InnerVolumeSpecName "kube-api-access-78mxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.413661 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.429473 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78mxm\" (UniqueName: \"kubernetes.io/projected/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-kube-api-access-78mxm\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.429512 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ff3424-465b-4cd1-a1b2-f30dfcb68e27-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.430014 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-881c-account-create-update-tcrgw" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.447506 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cnqlc" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.469243 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2815-account-create-update-qr72w" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.532145 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv55x\" (UniqueName: \"kubernetes.io/projected/ee661f59-12b9-4598-afcc-c20dac6c2694-kube-api-access-sv55x\") pod \"ee661f59-12b9-4598-afcc-c20dac6c2694\" (UID: \"ee661f59-12b9-4598-afcc-c20dac6c2694\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.532231 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc2t5\" (UniqueName: \"kubernetes.io/projected/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-kube-api-access-tc2t5\") pod \"14f9b463-34e5-4ed0-b31e-0662ea4c09a8\" (UID: \"14f9b463-34e5-4ed0-b31e-0662ea4c09a8\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.532512 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-operator-scripts\") pod \"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8\" (UID: \"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.532555 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-operator-scripts\") pod \"14f9b463-34e5-4ed0-b31e-0662ea4c09a8\" (UID: \"14f9b463-34e5-4ed0-b31e-0662ea4c09a8\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.532584 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee661f59-12b9-4598-afcc-c20dac6c2694-operator-scripts\") pod \"ee661f59-12b9-4598-afcc-c20dac6c2694\" (UID: \"ee661f59-12b9-4598-afcc-c20dac6c2694\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.532764 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zj2b\" (UniqueName: \"kubernetes.io/projected/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-kube-api-access-8zj2b\") pod \"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8\" (UID: \"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.537683 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee661f59-12b9-4598-afcc-c20dac6c2694-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee661f59-12b9-4598-afcc-c20dac6c2694" (UID: "ee661f59-12b9-4598-afcc-c20dac6c2694"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.537721 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14f9b463-34e5-4ed0-b31e-0662ea4c09a8" (UID: "14f9b463-34e5-4ed0-b31e-0662ea4c09a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.538646 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8" (UID: "fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.538949 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-kube-api-access-8zj2b" (OuterVolumeSpecName: "kube-api-access-8zj2b") pod "fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8" (UID: "fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8"). InnerVolumeSpecName "kube-api-access-8zj2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.543395 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-kube-api-access-tc2t5" (OuterVolumeSpecName: "kube-api-access-tc2t5") pod "14f9b463-34e5-4ed0-b31e-0662ea4c09a8" (UID: "14f9b463-34e5-4ed0-b31e-0662ea4c09a8"). InnerVolumeSpecName "kube-api-access-tc2t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.543789 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee661f59-12b9-4598-afcc-c20dac6c2694-kube-api-access-sv55x" (OuterVolumeSpecName: "kube-api-access-sv55x") pod "ee661f59-12b9-4598-afcc-c20dac6c2694" (UID: "ee661f59-12b9-4598-afcc-c20dac6c2694"). InnerVolumeSpecName "kube-api-access-sv55x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.572405 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tkcqt" event={"ID":"08cf8572-b400-41c1-ab44-089877cca867","Type":"ContainerDied","Data":"27464d5f247b5b610855141fe2e6a25f1d83bbb8f1336fdce7b3c2b50c4da98f"} Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.573077 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27464d5f247b5b610855141fe2e6a25f1d83bbb8f1336fdce7b3c2b50c4da98f" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.573399 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tkcqt" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.587390 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7jz5l" event={"ID":"94ff3424-465b-4cd1-a1b2-f30dfcb68e27","Type":"ContainerDied","Data":"1d9971afec0ddea71b4050d2acaccce9ba17afdae4cbf8680b6801325e114cd2"} Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.587423 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7jz5l" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.587448 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9971afec0ddea71b4050d2acaccce9ba17afdae4cbf8680b6801325e114cd2" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.624186 4898 generic.go:334] "Generic (PLEG): container finished" podID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerID="04eb23fee2b23df9195f4add3f90d69ca81328d2ca35cb43ded2c79f1edc6512" exitCode=0 Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.624471 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a638d06-7e3e-44f1-8c77-1026b4df8ce4","Type":"ContainerDied","Data":"04eb23fee2b23df9195f4add3f90d69ca81328d2ca35cb43ded2c79f1edc6512"} Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.632178 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-881c-account-create-update-tcrgw" event={"ID":"fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8","Type":"ContainerDied","Data":"cb3258b330e0a8c8fc93804766bcad21128a7a3818faf9f298e07e0a671eb6ed"} Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.632229 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb3258b330e0a8c8fc93804766bcad21128a7a3818faf9f298e07e0a671eb6ed" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.632295 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-881c-account-create-update-tcrgw" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.635699 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prdqx\" (UniqueName: \"kubernetes.io/projected/91d8e86b-0ab5-4724-8213-bd7258c4a124-kube-api-access-prdqx\") pod \"91d8e86b-0ab5-4724-8213-bd7258c4a124\" (UID: \"91d8e86b-0ab5-4724-8213-bd7258c4a124\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.635881 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91d8e86b-0ab5-4724-8213-bd7258c4a124-operator-scripts\") pod \"91d8e86b-0ab5-4724-8213-bd7258c4a124\" (UID: \"91d8e86b-0ab5-4724-8213-bd7258c4a124\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.636331 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv55x\" (UniqueName: \"kubernetes.io/projected/ee661f59-12b9-4598-afcc-c20dac6c2694-kube-api-access-sv55x\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.636348 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc2t5\" (UniqueName: \"kubernetes.io/projected/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-kube-api-access-tc2t5\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.636358 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.636366 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14f9b463-34e5-4ed0-b31e-0662ea4c09a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.636375 4898 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee661f59-12b9-4598-afcc-c20dac6c2694-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.636386 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zj2b\" (UniqueName: \"kubernetes.io/projected/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8-kube-api-access-8zj2b\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.636879 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91d8e86b-0ab5-4724-8213-bd7258c4a124-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91d8e86b-0ab5-4724-8213-bd7258c4a124" (UID: "91d8e86b-0ab5-4724-8213-bd7258c4a124"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.639776 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cnqlc" event={"ID":"14f9b463-34e5-4ed0-b31e-0662ea4c09a8","Type":"ContainerDied","Data":"d1d0f748e17c8ac4d0ff1ae8a194e60fd9a7ad033251e804e1861bacabd635b4"} Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.639809 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1d0f748e17c8ac4d0ff1ae8a194e60fd9a7ad033251e804e1861bacabd635b4" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.639881 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cnqlc" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.644921 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d8e86b-0ab5-4724-8213-bd7258c4a124-kube-api-access-prdqx" (OuterVolumeSpecName: "kube-api-access-prdqx") pod "91d8e86b-0ab5-4724-8213-bd7258c4a124" (UID: "91d8e86b-0ab5-4724-8213-bd7258c4a124"). InnerVolumeSpecName "kube-api-access-prdqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.658676 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2815-account-create-update-qr72w" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.659241 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2815-account-create-update-qr72w" event={"ID":"91d8e86b-0ab5-4724-8213-bd7258c4a124","Type":"ContainerDied","Data":"171b80d6edeae55bc0d9fa3c152bee4e48a0444934303198fbfe95c73d5c90ca"} Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.659285 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="171b80d6edeae55bc0d9fa3c152bee4e48a0444934303198fbfe95c73d5c90ca" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.666903 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" event={"ID":"ee661f59-12b9-4598-afcc-c20dac6c2694","Type":"ContainerDied","Data":"15487e741da489b03d341d6cf92d442ff056177888b59b97b03f24a56217995b"} Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.666940 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15487e741da489b03d341d6cf92d442ff056177888b59b97b03f24a56217995b" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.666991 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7e33-account-create-update-bkwm9" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.684050 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.738253 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prdqx\" (UniqueName: \"kubernetes.io/projected/91d8e86b-0ab5-4724-8213-bd7258c4a124-kube-api-access-prdqx\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.738302 4898 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91d8e86b-0ab5-4724-8213-bd7258c4a124-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.839315 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-config-data\") pod \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.839417 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-combined-ca-bundle\") pod \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.839599 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-scripts\") pod \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.839678 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-run-httpd\") pod \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.839721 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-sg-core-conf-yaml\") pod \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.839913 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-log-httpd\") pod \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.839969 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlts8\" (UniqueName: \"kubernetes.io/projected/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-kube-api-access-hlts8\") pod \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\" (UID: \"4a638d06-7e3e-44f1-8c77-1026b4df8ce4\") " Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.840150 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"4a638d06-7e3e-44f1-8c77-1026b4df8ce4" (UID: "4a638d06-7e3e-44f1-8c77-1026b4df8ce4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.840578 4898 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.842409 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4a638d06-7e3e-44f1-8c77-1026b4df8ce4" (UID: "4a638d06-7e3e-44f1-8c77-1026b4df8ce4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.842838 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-scripts" (OuterVolumeSpecName: "scripts") pod "4a638d06-7e3e-44f1-8c77-1026b4df8ce4" (UID: "4a638d06-7e3e-44f1-8c77-1026b4df8ce4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.844394 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-kube-api-access-hlts8" (OuterVolumeSpecName: "kube-api-access-hlts8") pod "4a638d06-7e3e-44f1-8c77-1026b4df8ce4" (UID: "4a638d06-7e3e-44f1-8c77-1026b4df8ce4"). InnerVolumeSpecName "kube-api-access-hlts8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.865513 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4a638d06-7e3e-44f1-8c77-1026b4df8ce4" (UID: "4a638d06-7e3e-44f1-8c77-1026b4df8ce4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.920265 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a638d06-7e3e-44f1-8c77-1026b4df8ce4" (UID: "4a638d06-7e3e-44f1-8c77-1026b4df8ce4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.936635 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-config-data" (OuterVolumeSpecName: "config-data") pod "4a638d06-7e3e-44f1-8c77-1026b4df8ce4" (UID: "4a638d06-7e3e-44f1-8c77-1026b4df8ce4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.942731 4898 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.942772 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlts8\" (UniqueName: \"kubernetes.io/projected/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-kube-api-access-hlts8\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.942786 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.942798 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.942810 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:48 crc kubenswrapper[4898]: I0120 04:07:48.942821 4898 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a638d06-7e3e-44f1-8c77-1026b4df8ce4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.677139 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a638d06-7e3e-44f1-8c77-1026b4df8ce4","Type":"ContainerDied","Data":"3d1fa7716f1c5275f36f3818f93b44c82a0e9847a9b1066daa273b4b2051655a"} Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.677208 4898 scope.go:117] "RemoveContainer" containerID="f88c2b696aece4614c3a3006339fbfed7c9d6b5a68c2fca4e5726c96bf38a005" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.677237 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.710129 4898 scope.go:117] "RemoveContainer" containerID="ad8210aac40544518ba99a6e8079d564a5ac7c417868ca918914efc7dcb4405b" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.712140 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.735358 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.742950 4898 scope.go:117] "RemoveContainer" containerID="ff4a6c6972e11be7f9fd6c6250ede102ae86bafd4cc9a5277f8a80be6448e289" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.744572 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:49 crc kubenswrapper[4898]: E0120 04:07:49.744980 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="ceilometer-central-agent" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.744998 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="ceilometer-central-agent" Jan 20 04:07:49 crc kubenswrapper[4898]: E0120 04:07:49.745011 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d8e86b-0ab5-4724-8213-bd7258c4a124" containerName="mariadb-account-create-update" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745016 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d8e86b-0ab5-4724-8213-bd7258c4a124" containerName="mariadb-account-create-update" Jan 20 04:07:49 crc kubenswrapper[4898]: E0120 04:07:49.745028 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ff3424-465b-4cd1-a1b2-f30dfcb68e27" containerName="mariadb-database-create" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745034 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ff3424-465b-4cd1-a1b2-f30dfcb68e27" containerName="mariadb-database-create" Jan 20 04:07:49 crc kubenswrapper[4898]: E0120 04:07:49.745043 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee661f59-12b9-4598-afcc-c20dac6c2694" containerName="mariadb-account-create-update" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745050 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee661f59-12b9-4598-afcc-c20dac6c2694" containerName="mariadb-account-create-update" Jan 20 04:07:49 crc kubenswrapper[4898]: E0120 04:07:49.745064 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8" containerName="mariadb-account-create-update" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745070 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8" containerName="mariadb-account-create-update" Jan 20 04:07:49 crc kubenswrapper[4898]: E0120 04:07:49.745083 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="sg-core" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745089 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="sg-core" Jan 20 04:07:49 crc kubenswrapper[4898]: E0120 04:07:49.745099 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="proxy-httpd" Jan 20 04:07:49 
crc kubenswrapper[4898]: I0120 04:07:49.745106 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="proxy-httpd" Jan 20 04:07:49 crc kubenswrapper[4898]: E0120 04:07:49.745131 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f9b463-34e5-4ed0-b31e-0662ea4c09a8" containerName="mariadb-database-create" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745137 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f9b463-34e5-4ed0-b31e-0662ea4c09a8" containerName="mariadb-database-create" Jan 20 04:07:49 crc kubenswrapper[4898]: E0120 04:07:49.745149 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cf8572-b400-41c1-ab44-089877cca867" containerName="mariadb-database-create" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745155 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cf8572-b400-41c1-ab44-089877cca867" containerName="mariadb-database-create" Jan 20 04:07:49 crc kubenswrapper[4898]: E0120 04:07:49.745166 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="ceilometer-notification-agent" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745173 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="ceilometer-notification-agent" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745335 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="sg-core" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745349 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee661f59-12b9-4598-afcc-c20dac6c2694" containerName="mariadb-account-create-update" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745359 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="proxy-httpd" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745368 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8" containerName="mariadb-account-create-update" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745382 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="ceilometer-notification-agent" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745391 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" containerName="ceilometer-central-agent" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745397 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cf8572-b400-41c1-ab44-089877cca867" containerName="mariadb-database-create" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745402 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f9b463-34e5-4ed0-b31e-0662ea4c09a8" containerName="mariadb-database-create" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745413 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ff3424-465b-4cd1-a1b2-f30dfcb68e27" containerName="mariadb-database-create" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.745423 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d8e86b-0ab5-4724-8213-bd7258c4a124" containerName="mariadb-account-create-update" Jan 20 04:07:49 crc 
kubenswrapper[4898]: I0120 04:07:49.747069 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.750637 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.750836 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.766407 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.785604 4898 scope.go:117] "RemoveContainer" containerID="04eb23fee2b23df9195f4add3f90d69ca81328d2ca35cb43ded2c79f1edc6512" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.925878 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-log-httpd\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.925926 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4tnt\" (UniqueName: \"kubernetes.io/projected/ec39306b-7399-4403-b765-065a14ee271b-kube-api-access-b4tnt\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.925946 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-config-data\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.925999 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-run-httpd\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.926019 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.926045 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.926071 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-scripts\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.930424 4898 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-hk8gg"] Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.931857 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.934269 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.936563 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.936996 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vhf8s" Jan 20 04:07:49 crc kubenswrapper[4898]: I0120 04:07:49.955425 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hk8gg"] Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.026958 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-run-httpd\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.027007 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.027284 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.027398 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-scripts\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.027635 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-log-httpd\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.027703 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4tnt\" (UniqueName: \"kubernetes.io/projected/ec39306b-7399-4403-b765-065a14ee271b-kube-api-access-b4tnt\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.027755 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-config-data\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.027919 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-run-httpd\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.028290 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-log-httpd\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.032164 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.033042 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-config-data\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.034249 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.053032 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4tnt\" (UniqueName: \"kubernetes.io/projected/ec39306b-7399-4403-b765-065a14ee271b-kube-api-access-b4tnt\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.066237 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-scripts\") pod \"ceilometer-0\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.075125 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.129638 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.129950 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdk6v\" (UniqueName: \"kubernetes.io/projected/7bf119bb-2bd0-418f-9b31-df14044054db-kube-api-access-sdk6v\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.130105 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-config-data\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.130170 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-scripts\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.232157 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-config-data\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.232249 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-scripts\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.232308 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.232349 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdk6v\" (UniqueName: \"kubernetes.io/projected/7bf119bb-2bd0-418f-9b31-df14044054db-kube-api-access-sdk6v\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.244482 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-config-data\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.245141 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.248376 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-scripts\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.252723 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdk6v\" (UniqueName: \"kubernetes.io/projected/7bf119bb-2bd0-418f-9b31-df14044054db-kube-api-access-sdk6v\") pod \"nova-cell0-conductor-db-sync-hk8gg\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.338350 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.550332 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.560339 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:50 crc kubenswrapper[4898]: W0120 04:07:50.574173 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec39306b_7399_4403_b765_065a14ee271b.slice/crio-8226f379192482e9088d21101554f709f9bc9e0e216745bc1d64971b27ed6c8c WatchSource:0}: Error finding container 8226f379192482e9088d21101554f709f9bc9e0e216745bc1d64971b27ed6c8c: Status 404 returned error can't find the container with id 8226f379192482e9088d21101554f709f9bc9e0e216745bc1d64971b27ed6c8c Jan 20 04:07:50 crc kubenswrapper[4898]: I0120 04:07:50.687869 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec39306b-7399-4403-b765-065a14ee271b","Type":"ContainerStarted","Data":"8226f379192482e9088d21101554f709f9bc9e0e216745bc1d64971b27ed6c8c"} Jan 20 04:07:51 crc kubenswrapper[4898]: I0120 04:07:51.012760 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hk8gg"] Jan 20 04:07:51 crc kubenswrapper[4898]: W0120 04:07:51.013667 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bf119bb_2bd0_418f_9b31_df14044054db.slice/crio-388ebb6836f094dd21f939ea0639060825c3402c94810c773de0878c401229f9 WatchSource:0}: Error finding container 388ebb6836f094dd21f939ea0639060825c3402c94810c773de0878c401229f9: Status 404 returned error can't find the container with id 388ebb6836f094dd21f939ea0639060825c3402c94810c773de0878c401229f9 Jan 20 04:07:51 crc kubenswrapper[4898]: I0120 04:07:51.708711 4898 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec39306b-7399-4403-b765-065a14ee271b","Type":"ContainerStarted","Data":"d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7"} Jan 20 04:07:51 crc kubenswrapper[4898]: I0120 04:07:51.709807 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hk8gg" event={"ID":"7bf119bb-2bd0-418f-9b31-df14044054db","Type":"ContainerStarted","Data":"388ebb6836f094dd21f939ea0639060825c3402c94810c773de0878c401229f9"} Jan 20 04:07:51 crc kubenswrapper[4898]: I0120 04:07:51.731961 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a638d06-7e3e-44f1-8c77-1026b4df8ce4" path="/var/lib/kubelet/pods/4a638d06-7e3e-44f1-8c77-1026b4df8ce4/volumes" Jan 20 04:07:52 crc kubenswrapper[4898]: I0120 04:07:52.725215 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec39306b-7399-4403-b765-065a14ee271b","Type":"ContainerStarted","Data":"f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f"} Jan 20 04:07:53 crc kubenswrapper[4898]: I0120 04:07:53.135151 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:07:53 crc kubenswrapper[4898]: I0120 04:07:53.742005 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec39306b-7399-4403-b765-065a14ee271b","Type":"ContainerStarted","Data":"1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30"} Jan 20 04:07:57 crc kubenswrapper[4898]: E0120 04:07:57.382278 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d4ffd01_3225_47f8_a88b_00acb1506664.slice/crio-bfdd1d5a4387e807ff7d5396ca7b489ce7003d477e637471fd37fbc28b22f0df\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda170c9ba_92a5_4a91_b4f1_0a6be53941e5.slice/crio-a4c348c6b3c80fb7ef23519492b0f867c80846e0ac9918ea40ee737954eb0299\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded637740_b1d6_464e_9167_010b86294ae0.slice/crio-50201ad7e362c2e8552a2853471560817780bc72df94318e1deebe0b645044f9\": RecentStats: unable to find data in memory cache]" Jan 20 04:07:59 crc kubenswrapper[4898]: I0120 04:07:59.605760 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:07:59 crc kubenswrapper[4898]: I0120 04:07:59.608652 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" containerName="glance-httpd" containerID="cri-o://5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946" gracePeriod=30 Jan 20 04:07:59 crc kubenswrapper[4898]: I0120 04:07:59.606767 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" containerName="glance-log" containerID="cri-o://9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1" gracePeriod=30 Jan 20 04:07:59 crc kubenswrapper[4898]: I0120 04:07:59.799218 4898 generic.go:334] "Generic (PLEG): container finished" podID="001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" containerID="9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1" exitCode=143 Jan 20 
Jan 20 04:07:59 crc kubenswrapper[4898]: I0120 04:07:59.799263 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0","Type":"ContainerDied","Data":"9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1"} Jan 20 04:08:00 crc kubenswrapper[4898]: I0120 04:08:00.809647 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hk8gg" event={"ID":"7bf119bb-2bd0-418f-9b31-df14044054db","Type":"ContainerStarted","Data":"7e5df16546237698f944cd41eb647f1590345be5ce7e64a2b97a0dcdc547a879"} Jan 20 04:08:00 crc kubenswrapper[4898]: I0120 04:08:00.811850 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec39306b-7399-4403-b765-065a14ee271b","Type":"ContainerStarted","Data":"30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8"} Jan 20 04:08:00 crc kubenswrapper[4898]: I0120 04:08:00.812128 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 04:08:00 crc kubenswrapper[4898]: I0120 04:08:00.812129 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="proxy-httpd" containerID="cri-o://30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8" gracePeriod=30 Jan 20 04:08:00 crc kubenswrapper[4898]: I0120 04:08:00.812124 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="ceilometer-central-agent" containerID="cri-o://d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7" gracePeriod=30 Jan 20 04:08:00 crc kubenswrapper[4898]: I0120 04:08:00.812160 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="ceilometer-notification-agent" containerID="cri-o://f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f" gracePeriod=30 Jan 20 04:08:00 crc kubenswrapper[4898]: I0120 04:08:00.812174 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="sg-core" containerID="cri-o://1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30" gracePeriod=30 Jan 20 04:08:00 crc kubenswrapper[4898]: I0120 04:08:00.829081 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-hk8gg" podStartSLOduration=3.29369732 podStartE2EDuration="11.829061521s" podCreationTimestamp="2026-01-20 04:07:49 +0000 UTC" firstStartedPulling="2026-01-20 04:07:51.016702048 +0000 UTC m=+1117.616489907" lastFinishedPulling="2026-01-20 04:07:59.552066249 +0000 UTC m=+1126.151854108" observedRunningTime="2026-01-20 04:08:00.824362372 +0000 UTC m=+1127.424150231" watchObservedRunningTime="2026-01-20 04:08:00.829061521 +0000 UTC m=+1127.428849380"
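
pod_startup_latency_tracker.go logs two durations: podStartE2EDuration, from pod creation to first observed running, and podStartSLOduration, the same interval minus image-pull time. The nova-cell0-conductor-db-sync-hk8gg entry above is self-consistent: 11.829061521s - (04:07:59.552066249 - 04:07:51.016702048) = 3.29369732s. A small Go check of that arithmetic, using only values quoted from the log:

package main

import (
	"fmt"
	"time"
)

// Recomputes podStartSLOduration for nova-cell0-conductor-db-sync-hk8gg:
// SLO duration = E2E duration minus the image-pull window.
func main() {
	const layout = "2006-01-02 15:04:05.000000000 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	firstStartedPulling := parse("2026-01-20 04:07:51.016702048 +0000 UTC")
	lastFinishedPulling := parse("2026-01-20 04:07:59.552066249 +0000 UTC")
	e2e, _ := time.ParseDuration("11.829061521s") // podStartE2EDuration

	slo := e2e - lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println(slo) // 3.29369732s, matching podStartSLOduration in the log
}
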
m=+1126.152393516" observedRunningTime="2026-01-20 04:08:00.843559721 +0000 UTC m=+1127.443347580" watchObservedRunningTime="2026-01-20 04:08:00.854413576 +0000 UTC m=+1127.454201435" Jan 20 04:08:01 crc kubenswrapper[4898]: I0120 04:08:01.052082 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 04:08:01 crc kubenswrapper[4898]: I0120 04:08:01.052851 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="58b0e573-251e-4529-9dad-55b94fbf1570" containerName="glance-log" containerID="cri-o://32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf" gracePeriod=30 Jan 20 04:08:01 crc kubenswrapper[4898]: I0120 04:08:01.052922 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="58b0e573-251e-4529-9dad-55b94fbf1570" containerName="glance-httpd" containerID="cri-o://3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64" gracePeriod=30 Jan 20 04:08:01 crc kubenswrapper[4898]: I0120 04:08:01.821999 4898 generic.go:334] "Generic (PLEG): container finished" podID="ec39306b-7399-4403-b765-065a14ee271b" containerID="30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8" exitCode=0 Jan 20 04:08:01 crc kubenswrapper[4898]: I0120 04:08:01.822268 4898 generic.go:334] "Generic (PLEG): container finished" podID="ec39306b-7399-4403-b765-065a14ee271b" containerID="1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30" exitCode=2 Jan 20 04:08:01 crc kubenswrapper[4898]: I0120 04:08:01.822277 4898 generic.go:334] "Generic (PLEG): container finished" podID="ec39306b-7399-4403-b765-065a14ee271b" containerID="d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7" exitCode=0 Jan 20 04:08:01 crc kubenswrapper[4898]: I0120 04:08:01.822316 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec39306b-7399-4403-b765-065a14ee271b","Type":"ContainerDied","Data":"30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8"} Jan 20 04:08:01 crc kubenswrapper[4898]: I0120 04:08:01.822344 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec39306b-7399-4403-b765-065a14ee271b","Type":"ContainerDied","Data":"1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30"} Jan 20 04:08:01 crc kubenswrapper[4898]: I0120 04:08:01.822353 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec39306b-7399-4403-b765-065a14ee271b","Type":"ContainerDied","Data":"d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7"} Jan 20 04:08:01 crc kubenswrapper[4898]: I0120 04:08:01.828698 4898 generic.go:334] "Generic (PLEG): container finished" podID="58b0e573-251e-4529-9dad-55b94fbf1570" containerID="32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf" exitCode=143 Jan 20 04:08:01 crc kubenswrapper[4898]: I0120 04:08:01.828793 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58b0e573-251e-4529-9dad-55b94fbf1570","Type":"ContainerDied","Data":"32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf"} Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.246202 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.407775 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-config-data\") pod \"ec39306b-7399-4403-b765-065a14ee271b\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.408411 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-run-httpd\") pod \"ec39306b-7399-4403-b765-065a14ee271b\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.408538 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-scripts\") pod \"ec39306b-7399-4403-b765-065a14ee271b\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.408671 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-combined-ca-bundle\") pod \"ec39306b-7399-4403-b765-065a14ee271b\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.408789 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-log-httpd\") pod \"ec39306b-7399-4403-b765-065a14ee271b\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.408866 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4tnt\" (UniqueName: \"kubernetes.io/projected/ec39306b-7399-4403-b765-065a14ee271b-kube-api-access-b4tnt\") pod \"ec39306b-7399-4403-b765-065a14ee271b\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.408906 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ec39306b-7399-4403-b765-065a14ee271b" (UID: "ec39306b-7399-4403-b765-065a14ee271b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.409000 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-sg-core-conf-yaml\") pod \"ec39306b-7399-4403-b765-065a14ee271b\" (UID: \"ec39306b-7399-4403-b765-065a14ee271b\") " Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.409499 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ec39306b-7399-4403-b765-065a14ee271b" (UID: "ec39306b-7399-4403-b765-065a14ee271b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.410988 4898 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.411077 4898 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec39306b-7399-4403-b765-065a14ee271b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.414768 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec39306b-7399-4403-b765-065a14ee271b-kube-api-access-b4tnt" (OuterVolumeSpecName: "kube-api-access-b4tnt") pod "ec39306b-7399-4403-b765-065a14ee271b" (UID: "ec39306b-7399-4403-b765-065a14ee271b"). InnerVolumeSpecName "kube-api-access-b4tnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.424000 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-scripts" (OuterVolumeSpecName: "scripts") pod "ec39306b-7399-4403-b765-065a14ee271b" (UID: "ec39306b-7399-4403-b765-065a14ee271b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.440364 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ec39306b-7399-4403-b765-065a14ee271b" (UID: "ec39306b-7399-4403-b765-065a14ee271b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.490840 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec39306b-7399-4403-b765-065a14ee271b" (UID: "ec39306b-7399-4403-b765-065a14ee271b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.518282 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.518320 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.518331 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4tnt\" (UniqueName: \"kubernetes.io/projected/ec39306b-7399-4403-b765-065a14ee271b-kube-api-access-b4tnt\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.518339 4898 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.529043 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-config-data" (OuterVolumeSpecName: "config-data") pod "ec39306b-7399-4403-b765-065a14ee271b" (UID: "ec39306b-7399-4403-b765-065a14ee271b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.620903 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec39306b-7399-4403-b765-065a14ee271b-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.839841 4898 generic.go:334] "Generic (PLEG): container finished" podID="ec39306b-7399-4403-b765-065a14ee271b" containerID="f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f" exitCode=0 Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.839897 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec39306b-7399-4403-b765-065a14ee271b","Type":"ContainerDied","Data":"f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f"} Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.839916 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.839935 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec39306b-7399-4403-b765-065a14ee271b","Type":"ContainerDied","Data":"8226f379192482e9088d21101554f709f9bc9e0e216745bc1d64971b27ed6c8c"} Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.839962 4898 scope.go:117] "RemoveContainer" containerID="30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.863422 4898 scope.go:117] "RemoveContainer" containerID="1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.879628 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.892978 4898 scope.go:117] "RemoveContainer" containerID="f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.894820 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.926570 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:08:02 crc kubenswrapper[4898]: E0120 04:08:02.927046 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="ceilometer-central-agent" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.927068 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="ceilometer-central-agent" Jan 20 04:08:02 crc kubenswrapper[4898]: E0120 04:08:02.927108 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="proxy-httpd" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.927117 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="proxy-httpd" Jan 20 04:08:02 crc kubenswrapper[4898]: E0120 04:08:02.927140 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="ceilometer-notification-agent" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.927148 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="ceilometer-notification-agent" Jan 20 04:08:02 crc kubenswrapper[4898]: E0120 04:08:02.927167 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="sg-core" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.927174 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="sg-core" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.927376 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="proxy-httpd" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.927401 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="ceilometer-central-agent" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.927414 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec39306b-7399-4403-b765-065a14ee271b" 
containerName="ceilometer-notification-agent" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.927454 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec39306b-7399-4403-b765-065a14ee271b" containerName="sg-core" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.939634 4898 scope.go:117] "RemoveContainer" containerID="d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.942857 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.950706 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.951095 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 04:08:02 crc kubenswrapper[4898]: I0120 04:08:02.959668 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.023673 4898 scope.go:117] "RemoveContainer" containerID="30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8" Jan 20 04:08:03 crc kubenswrapper[4898]: E0120 04:08:03.027608 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8\": container with ID starting with 30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8 not found: ID does not exist" containerID="30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.027664 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8"} err="failed to get container status \"30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8\": rpc error: code = NotFound desc = could not find container \"30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8\": container with ID starting with 30906ec6298859a1ffc5f4a2f4a7b1722b8fc50a4d82e22876be65914592ffa8 not found: ID does not exist" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.027697 4898 scope.go:117] "RemoveContainer" containerID="1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30" Jan 20 04:08:03 crc kubenswrapper[4898]: E0120 04:08:03.028152 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30\": container with ID starting with 1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30 not found: ID does not exist" containerID="1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.028184 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30"} err="failed to get container status \"1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30\": rpc error: code = NotFound desc = could not find container \"1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30\": container with ID starting with 1c7ef7e6b7648c370c7c41e878f8e3ac771748cd0dcf544c11af9e71306a4a30 not found: ID does not exist" Jan 20 
04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.028200 4898 scope.go:117] "RemoveContainer" containerID="f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f" Jan 20 04:08:03 crc kubenswrapper[4898]: E0120 04:08:03.028588 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f\": container with ID starting with f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f not found: ID does not exist" containerID="f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.028635 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f"} err="failed to get container status \"f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f\": rpc error: code = NotFound desc = could not find container \"f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f\": container with ID starting with f0e1181ea748083eeed694d6f93ff6550a998a847aac73e4a6edf2585a466a0f not found: ID does not exist" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.028666 4898 scope.go:117] "RemoveContainer" containerID="d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7" Jan 20 04:08:03 crc kubenswrapper[4898]: E0120 04:08:03.030724 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7\": container with ID starting with d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7 not found: ID does not exist" containerID="d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.030759 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7"} err="failed to get container status \"d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7\": rpc error: code = NotFound desc = could not find container \"d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7\": container with ID starting with d8b4d4192a5003a2f36cd00d232bca858463e7fc4c73bf3ec83e60587ef864c7 not found: ID does not exist" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.130083 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.130135 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5czp\" (UniqueName: \"kubernetes.io/projected/c9599b5a-abed-4816-ac11-d54cf903104c-kube-api-access-z5czp\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.130172 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.130199 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-log-httpd\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.130260 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-config-data\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.130291 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-scripts\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.130346 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-run-httpd\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.232403 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.232484 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5czp\" (UniqueName: \"kubernetes.io/projected/c9599b5a-abed-4816-ac11-d54cf903104c-kube-api-access-z5czp\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.232550 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.232579 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-log-httpd\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.232635 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-config-data\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.232667 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-scripts\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.232700 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-run-httpd\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.233218 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-run-httpd\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.233405 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-log-httpd\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.238174 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.238273 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-scripts\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.238807 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.240317 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-config-data\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.248909 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5czp\" (UniqueName: \"kubernetes.io/projected/c9599b5a-abed-4816-ac11-d54cf903104c-kube-api-access-z5czp\") pod \"ceilometer-0\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.375966 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.437838 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.537305 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-public-tls-certs\") pod \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.537356 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2stx\" (UniqueName: \"kubernetes.io/projected/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-kube-api-access-l2stx\") pod \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.537387 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.537445 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-combined-ca-bundle\") pod \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.537563 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-scripts\") pod \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.537585 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-config-data\") pod \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.537653 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-logs\") pod \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.537676 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-httpd-run\") pod \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\" (UID: \"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0\") " Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.539176 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" (UID: "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.548182 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-logs" (OuterVolumeSpecName: "logs") pod "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" (UID: "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.548819 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-scripts" (OuterVolumeSpecName: "scripts") pod "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" (UID: "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.550209 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-kube-api-access-l2stx" (OuterVolumeSpecName: "kube-api-access-l2stx") pod "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" (UID: "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0"). InnerVolumeSpecName "kube-api-access-l2stx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.550927 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" (UID: "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.570958 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" (UID: "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.602693 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" (UID: "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.615010 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-config-data" (OuterVolumeSpecName: "config-data") pod "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" (UID: "001e1f5f-83b1-4e1e-9d1c-cbdf883824a0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.643054 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.643086 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.643096 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-logs\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.643106 4898 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.643113 4898 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.643124 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2stx\" (UniqueName: \"kubernetes.io/projected/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-kube-api-access-l2stx\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.643151 4898 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.643160 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.664308 4898 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.734201 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec39306b-7399-4403-b765-065a14ee271b" path="/var/lib/kubelet/pods/ec39306b-7399-4403-b765-065a14ee271b/volumes" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.745413 4898 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.855448 4898 generic.go:334] "Generic (PLEG): container finished" podID="001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" containerID="5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946" exitCode=0 Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.855501 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.855521 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0","Type":"ContainerDied","Data":"5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946"} Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.856609 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"001e1f5f-83b1-4e1e-9d1c-cbdf883824a0","Type":"ContainerDied","Data":"4f2a9810b4851adfb4fab3c526794f33902e5327eb6591c2d150e8fe40882587"} Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.856630 4898 scope.go:117] "RemoveContainer" containerID="5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946" Jan 20 04:08:03 crc kubenswrapper[4898]: W0120 04:08:03.880361 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9599b5a_abed_4816_ac11_d54cf903104c.slice/crio-5abf0b6702bc0849e75f8236517ffe3f193d76aa3945b02b29a55f7fac6af163 WatchSource:0}: Error finding container 5abf0b6702bc0849e75f8236517ffe3f193d76aa3945b02b29a55f7fac6af163: Status 404 returned error can't find the container with id 5abf0b6702bc0849e75f8236517ffe3f193d76aa3945b02b29a55f7fac6af163 Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.882848 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.891127 4898 scope.go:117] "RemoveContainer" containerID="9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.916828 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.925478 4898 scope.go:117] "RemoveContainer" containerID="5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946" Jan 20 04:08:03 crc kubenswrapper[4898]: E0120 04:08:03.925931 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946\": container with ID starting with 5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946 not found: ID does not exist" containerID="5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.925963 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946"} err="failed to get container status \"5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946\": rpc error: code = NotFound desc = could not find container \"5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946\": container with ID starting with 5b2e0db8f00093191a07941f7d40c96d9bd783d4b32402e27e5172e2142b3946 not found: ID does not exist" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.925986 4898 scope.go:117] "RemoveContainer" containerID="9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1" Jan 20 04:08:03 crc kubenswrapper[4898]: E0120 04:08:03.926345 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1\": container with ID starting with 9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1 not found: ID does not exist" containerID="9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.926417 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1"} err="failed to get container status \"9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1\": rpc error: code = NotFound desc = could not find container \"9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1\": container with ID starting with 9dc74ce75dc5d6802405c4256566bcb08eee18bbfec887c3ea013985ab8c1ff1 not found: ID does not exist" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.934796 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.945303 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:08:03 crc kubenswrapper[4898]: E0120 04:08:03.946088 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" containerName="glance-httpd" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.946117 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" containerName="glance-httpd" Jan 20 04:08:03 crc kubenswrapper[4898]: E0120 04:08:03.946141 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" containerName="glance-log" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.946150 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" containerName="glance-log" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.946455 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" containerName="glance-log" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.946506 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" containerName="glance-httpd" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.947696 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.952419 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.953075 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 20 04:08:03 crc kubenswrapper[4898]: I0120 04:08:03.953280 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.052827 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878e8a52-a939-4b57-b229-d72049650611-logs\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.052874 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4xvs\" (UniqueName: \"kubernetes.io/projected/878e8a52-a939-4b57-b229-d72049650611-kube-api-access-g4xvs\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.053191 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-scripts\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.053253 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.053291 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.053334 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/878e8a52-a939-4b57-b229-d72049650611-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.053575 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-config-data\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.053653 4898 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.155349 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-config-data\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.155466 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.155517 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878e8a52-a939-4b57-b229-d72049650611-logs\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.155540 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4xvs\" (UniqueName: \"kubernetes.io/projected/878e8a52-a939-4b57-b229-d72049650611-kube-api-access-g4xvs\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.155613 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-scripts\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.155638 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.155974 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.156087 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878e8a52-a939-4b57-b229-d72049650611-logs\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.156428 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.156488 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/878e8a52-a939-4b57-b229-d72049650611-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.156855 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/878e8a52-a939-4b57-b229-d72049650611-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.162117 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.164026 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-config-data\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.164138 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.164612 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/878e8a52-a939-4b57-b229-d72049650611-scripts\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.178020 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4xvs\" (UniqueName: \"kubernetes.io/projected/878e8a52-a939-4b57-b229-d72049650611-kube-api-access-g4xvs\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.204209 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"878e8a52-a939-4b57-b229-d72049650611\") " pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.263960 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.758266 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.868988 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-httpd-run\") pod \"58b0e573-251e-4529-9dad-55b94fbf1570\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.869049 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmgqm\" (UniqueName: \"kubernetes.io/projected/58b0e573-251e-4529-9dad-55b94fbf1570-kube-api-access-zmgqm\") pod \"58b0e573-251e-4529-9dad-55b94fbf1570\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.869188 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"58b0e573-251e-4529-9dad-55b94fbf1570\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.869272 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-config-data\") pod \"58b0e573-251e-4529-9dad-55b94fbf1570\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.869325 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-combined-ca-bundle\") pod \"58b0e573-251e-4529-9dad-55b94fbf1570\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.869344 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-scripts\") pod \"58b0e573-251e-4529-9dad-55b94fbf1570\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.869367 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-internal-tls-certs\") pod \"58b0e573-251e-4529-9dad-55b94fbf1570\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.869388 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-logs\") pod \"58b0e573-251e-4529-9dad-55b94fbf1570\" (UID: \"58b0e573-251e-4529-9dad-55b94fbf1570\") " Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.870382 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "58b0e573-251e-4529-9dad-55b94fbf1570" (UID: "58b0e573-251e-4529-9dad-55b94fbf1570"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.871646 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-logs" (OuterVolumeSpecName: "logs") pod "58b0e573-251e-4529-9dad-55b94fbf1570" (UID: "58b0e573-251e-4529-9dad-55b94fbf1570"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.872914 4898 generic.go:334] "Generic (PLEG): container finished" podID="58b0e573-251e-4529-9dad-55b94fbf1570" containerID="3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64" exitCode=0 Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.872985 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58b0e573-251e-4529-9dad-55b94fbf1570","Type":"ContainerDied","Data":"3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64"} Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.873018 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58b0e573-251e-4529-9dad-55b94fbf1570","Type":"ContainerDied","Data":"cd3a3904edfe0bd20b4c8f66d52fa22ac06be4f509831b7c9d69dfb8a95c4e6d"} Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.873039 4898 scope.go:117] "RemoveContainer" containerID="3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.873687 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.874930 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b0e573-251e-4529-9dad-55b94fbf1570-kube-api-access-zmgqm" (OuterVolumeSpecName: "kube-api-access-zmgqm") pod "58b0e573-251e-4529-9dad-55b94fbf1570" (UID: "58b0e573-251e-4529-9dad-55b94fbf1570"). InnerVolumeSpecName "kube-api-access-zmgqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.875501 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.875681 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9599b5a-abed-4816-ac11-d54cf903104c","Type":"ContainerStarted","Data":"5abf0b6702bc0849e75f8236517ffe3f193d76aa3945b02b29a55f7fac6af163"} Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.876272 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "58b0e573-251e-4529-9dad-55b94fbf1570" (UID: "58b0e573-251e-4529-9dad-55b94fbf1570"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.877653 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-scripts" (OuterVolumeSpecName: "scripts") pod "58b0e573-251e-4529-9dad-55b94fbf1570" (UID: "58b0e573-251e-4529-9dad-55b94fbf1570"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.899817 4898 scope.go:117] "RemoveContainer" containerID="32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.905819 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58b0e573-251e-4529-9dad-55b94fbf1570" (UID: "58b0e573-251e-4529-9dad-55b94fbf1570"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.931876 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "58b0e573-251e-4529-9dad-55b94fbf1570" (UID: "58b0e573-251e-4529-9dad-55b94fbf1570"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.934666 4898 scope.go:117] "RemoveContainer" containerID="3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64" Jan 20 04:08:04 crc kubenswrapper[4898]: E0120 04:08:04.935450 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64\": container with ID starting with 3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64 not found: ID does not exist" containerID="3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.935501 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64"} err="failed to get container status \"3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64\": rpc error: code = NotFound desc = could not find container \"3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64\": container with ID starting with 3a3dea58c960c98a4f973ed8d83158bd58416596bd0efee36fe7fb6196d18f64 not found: ID does not exist" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.935534 4898 scope.go:117] "RemoveContainer" containerID="32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf" Jan 20 04:08:04 crc kubenswrapper[4898]: E0120 04:08:04.935891 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf\": container with ID starting with 32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf not found: ID does not exist" containerID="32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf" Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.936007 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf"} err="failed to get container status \"32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf\": rpc error: code = NotFound desc = could not find container \"32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf\": container with ID starting with 
Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.936007 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf"} err="failed to get container status \"32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf\": rpc error: code = NotFound desc = could not find container \"32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf\": container with ID starting with 32d349f172f969c6c27e6a3705c34b66202b0a06e49728a1b5dcdaa43ace01cf not found: ID does not exist"
Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.946634 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-config-data" (OuterVolumeSpecName: "config-data") pod "58b0e573-251e-4529-9dad-55b94fbf1570" (UID: "58b0e573-251e-4529-9dad-55b94fbf1570"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.971925 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.971965 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.971977 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.971985 4898 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b0e573-251e-4529-9dad-55b94fbf1570-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.971993 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-logs\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.972001 4898 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58b0e573-251e-4529-9dad-55b94fbf1570-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.972009 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmgqm\" (UniqueName: \"kubernetes.io/projected/58b0e573-251e-4529-9dad-55b94fbf1570-kube-api-access-zmgqm\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.972043 4898 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Jan 20 04:08:04 crc kubenswrapper[4898]: I0120 04:08:04.991208 4898 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.073287 4898 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.226887 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.236358 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
pods=["openstack/glance-default-internal-api-0"] Jan 20 04:08:05 crc kubenswrapper[4898]: E0120 04:08:05.260288 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b0e573-251e-4529-9dad-55b94fbf1570" containerName="glance-log" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.260313 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b0e573-251e-4529-9dad-55b94fbf1570" containerName="glance-log" Jan 20 04:08:05 crc kubenswrapper[4898]: E0120 04:08:05.260331 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b0e573-251e-4529-9dad-55b94fbf1570" containerName="glance-httpd" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.260341 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b0e573-251e-4529-9dad-55b94fbf1570" containerName="glance-httpd" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.260628 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b0e573-251e-4529-9dad-55b94fbf1570" containerName="glance-log" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.260693 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b0e573-251e-4529-9dad-55b94fbf1570" containerName="glance-httpd" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.261959 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.263464 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.266940 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.275318 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.379565 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.379614 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.379639 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.379863 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " 
pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.380018 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db5d1c2c-5f41-4b41-8503-72518de5ba3a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.380045 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db5d1c2c-5f41-4b41-8503-72518de5ba3a-logs\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.380141 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2wm8\" (UniqueName: \"kubernetes.io/projected/db5d1c2c-5f41-4b41-8503-72518de5ba3a-kube-api-access-r2wm8\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.380214 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.482160 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2wm8\" (UniqueName: \"kubernetes.io/projected/db5d1c2c-5f41-4b41-8503-72518de5ba3a-kube-api-access-r2wm8\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.482479 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.482525 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.482546 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.482568 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " 
pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.482618 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.482664 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db5d1c2c-5f41-4b41-8503-72518de5ba3a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.482683 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db5d1c2c-5f41-4b41-8503-72518de5ba3a-logs\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.482900 4898 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.483138 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db5d1c2c-5f41-4b41-8503-72518de5ba3a-logs\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.483396 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db5d1c2c-5f41-4b41-8503-72518de5ba3a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.490532 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.493042 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.493712 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.494047 4898 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5d1c2c-5f41-4b41-8503-72518de5ba3a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.502136 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2wm8\" (UniqueName: \"kubernetes.io/projected/db5d1c2c-5f41-4b41-8503-72518de5ba3a-kube-api-access-r2wm8\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.517674 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"db5d1c2c-5f41-4b41-8503-72518de5ba3a\") " pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.636579 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.735220 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="001e1f5f-83b1-4e1e-9d1c-cbdf883824a0" path="/var/lib/kubelet/pods/001e1f5f-83b1-4e1e-9d1c-cbdf883824a0/volumes" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.736087 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b0e573-251e-4529-9dad-55b94fbf1570" path="/var/lib/kubelet/pods/58b0e573-251e-4529-9dad-55b94fbf1570/volumes" Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.896455 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9599b5a-abed-4816-ac11-d54cf903104c","Type":"ContainerStarted","Data":"01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8"} Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.900007 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"878e8a52-a939-4b57-b229-d72049650611","Type":"ContainerStarted","Data":"ed760438ec587ccadc785bde437a1e50f856794ad1498f00bc1916a4ebdf1512"} Jan 20 04:08:05 crc kubenswrapper[4898]: I0120 04:08:05.900037 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"878e8a52-a939-4b57-b229-d72049650611","Type":"ContainerStarted","Data":"31a1e5450342d5c66a46fb3f3092e359584f411102b1fc0ba009130fd4686b83"} Jan 20 04:08:06 crc kubenswrapper[4898]: I0120 04:08:06.210128 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 04:08:06 crc kubenswrapper[4898]: W0120 04:08:06.212815 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb5d1c2c_5f41_4b41_8503_72518de5ba3a.slice/crio-ab0aacd7388773fed73827c66fc3c22d9da3a37b393a301ca2443b7d6d1ee909 WatchSource:0}: Error finding container ab0aacd7388773fed73827c66fc3c22d9da3a37b393a301ca2443b7d6d1ee909: Status 404 returned error can't find the container with id ab0aacd7388773fed73827c66fc3c22d9da3a37b393a301ca2443b7d6d1ee909 Jan 20 04:08:06 crc kubenswrapper[4898]: I0120 04:08:06.916558 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"db5d1c2c-5f41-4b41-8503-72518de5ba3a","Type":"ContainerStarted","Data":"fa4c0b4ebd3d5bf9c20352b60226e238dfe9f28f72678045e9749d73d2a7d343"} Jan 20 04:08:06 crc kubenswrapper[4898]: I0120 04:08:06.916876 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db5d1c2c-5f41-4b41-8503-72518de5ba3a","Type":"ContainerStarted","Data":"ab0aacd7388773fed73827c66fc3c22d9da3a37b393a301ca2443b7d6d1ee909"} Jan 20 04:08:06 crc kubenswrapper[4898]: I0120 04:08:06.926846 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9599b5a-abed-4816-ac11-d54cf903104c","Type":"ContainerStarted","Data":"1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942"} Jan 20 04:08:06 crc kubenswrapper[4898]: I0120 04:08:06.926892 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9599b5a-abed-4816-ac11-d54cf903104c","Type":"ContainerStarted","Data":"75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56"} Jan 20 04:08:06 crc kubenswrapper[4898]: I0120 04:08:06.930090 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"878e8a52-a939-4b57-b229-d72049650611","Type":"ContainerStarted","Data":"a84c23d3736675e25d19e48fc78f73e06b4d8466121bf8979ab22a98634c1c7f"} Jan 20 04:08:06 crc kubenswrapper[4898]: I0120 04:08:06.960978 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.960953297 podStartE2EDuration="3.960953297s" podCreationTimestamp="2026-01-20 04:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:08:06.947635974 +0000 UTC m=+1133.547423833" watchObservedRunningTime="2026-01-20 04:08:06.960953297 +0000 UTC m=+1133.560741156" Jan 20 04:08:07 crc kubenswrapper[4898]: E0120 04:08:07.612485 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded637740_b1d6_464e_9167_010b86294ae0.slice/crio-50201ad7e362c2e8552a2853471560817780bc72df94318e1deebe0b645044f9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d4ffd01_3225_47f8_a88b_00acb1506664.slice/crio-bfdd1d5a4387e807ff7d5396ca7b489ce7003d477e637471fd37fbc28b22f0df\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda170c9ba_92a5_4a91_b4f1_0a6be53941e5.slice/crio-a4c348c6b3c80fb7ef23519492b0f867c80846e0ac9918ea40ee737954eb0299\": RecentStats: unable to find data in memory cache]" Jan 20 04:08:07 crc kubenswrapper[4898]: I0120 04:08:07.948585 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db5d1c2c-5f41-4b41-8503-72518de5ba3a","Type":"ContainerStarted","Data":"47593827a97169d3de72bf15c5bc4d2e23f81ae42002fba1139a593d545c0c70"} Jan 20 04:08:07 crc kubenswrapper[4898]: I0120 04:08:07.990935 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.990899187 podStartE2EDuration="2.990899187s" podCreationTimestamp="2026-01-20 04:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-20 04:08:07.975867139 +0000 UTC m=+1134.575654998" watchObservedRunningTime="2026-01-20 04:08:07.990899187 +0000 UTC m=+1134.590687086" Jan 20 04:08:08 crc kubenswrapper[4898]: I0120 04:08:08.960874 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9599b5a-abed-4816-ac11-d54cf903104c","Type":"ContainerStarted","Data":"8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0"} Jan 20 04:08:08 crc kubenswrapper[4898]: I0120 04:08:08.983128 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.438928577 podStartE2EDuration="6.983104149s" podCreationTimestamp="2026-01-20 04:08:02 +0000 UTC" firstStartedPulling="2026-01-20 04:08:03.891409869 +0000 UTC m=+1130.491197728" lastFinishedPulling="2026-01-20 04:08:08.435585401 +0000 UTC m=+1135.035373300" observedRunningTime="2026-01-20 04:08:08.977930365 +0000 UTC m=+1135.577718224" watchObservedRunningTime="2026-01-20 04:08:08.983104149 +0000 UTC m=+1135.582892008" Jan 20 04:08:09 crc kubenswrapper[4898]: I0120 04:08:09.970785 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 04:08:14 crc kubenswrapper[4898]: I0120 04:08:14.265719 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 20 04:08:14 crc kubenswrapper[4898]: I0120 04:08:14.266396 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 20 04:08:14 crc kubenswrapper[4898]: I0120 04:08:14.303314 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 20 04:08:14 crc kubenswrapper[4898]: I0120 04:08:14.337023 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 20 04:08:15 crc kubenswrapper[4898]: I0120 04:08:15.027967 4898 generic.go:334] "Generic (PLEG): container finished" podID="7bf119bb-2bd0-418f-9b31-df14044054db" containerID="7e5df16546237698f944cd41eb647f1590345be5ce7e64a2b97a0dcdc547a879" exitCode=0 Jan 20 04:08:15 crc kubenswrapper[4898]: I0120 04:08:15.028041 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hk8gg" event={"ID":"7bf119bb-2bd0-418f-9b31-df14044054db","Type":"ContainerDied","Data":"7e5df16546237698f944cd41eb647f1590345be5ce7e64a2b97a0dcdc547a879"} Jan 20 04:08:15 crc kubenswrapper[4898]: I0120 04:08:15.029640 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 20 04:08:15 crc kubenswrapper[4898]: I0120 04:08:15.029709 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 20 04:08:15 crc kubenswrapper[4898]: I0120 04:08:15.638022 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:15 crc kubenswrapper[4898]: I0120 04:08:15.638089 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:15 crc kubenswrapper[4898]: I0120 04:08:15.693890 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:15 crc kubenswrapper[4898]: I0120 04:08:15.694239 4898 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.043342 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.043411 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.438812 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.607315 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-combined-ca-bundle\") pod \"7bf119bb-2bd0-418f-9b31-df14044054db\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.607380 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdk6v\" (UniqueName: \"kubernetes.io/projected/7bf119bb-2bd0-418f-9b31-df14044054db-kube-api-access-sdk6v\") pod \"7bf119bb-2bd0-418f-9b31-df14044054db\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.607492 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-scripts\") pod \"7bf119bb-2bd0-418f-9b31-df14044054db\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.607582 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-config-data\") pod \"7bf119bb-2bd0-418f-9b31-df14044054db\" (UID: \"7bf119bb-2bd0-418f-9b31-df14044054db\") " Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.612418 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-scripts" (OuterVolumeSpecName: "scripts") pod "7bf119bb-2bd0-418f-9b31-df14044054db" (UID: "7bf119bb-2bd0-418f-9b31-df14044054db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.623515 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bf119bb-2bd0-418f-9b31-df14044054db-kube-api-access-sdk6v" (OuterVolumeSpecName: "kube-api-access-sdk6v") pod "7bf119bb-2bd0-418f-9b31-df14044054db" (UID: "7bf119bb-2bd0-418f-9b31-df14044054db"). InnerVolumeSpecName "kube-api-access-sdk6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.633798 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bf119bb-2bd0-418f-9b31-df14044054db" (UID: "7bf119bb-2bd0-418f-9b31-df14044054db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.659645 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-config-data" (OuterVolumeSpecName: "config-data") pod "7bf119bb-2bd0-418f-9b31-df14044054db" (UID: "7bf119bb-2bd0-418f-9b31-df14044054db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.710105 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.710170 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.710182 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf119bb-2bd0-418f-9b31-df14044054db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.710195 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdk6v\" (UniqueName: \"kubernetes.io/projected/7bf119bb-2bd0-418f-9b31-df14044054db-kube-api-access-sdk6v\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.917839 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 20 04:08:16 crc kubenswrapper[4898]: I0120 04:08:16.919679 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.051468 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hk8gg" event={"ID":"7bf119bb-2bd0-418f-9b31-df14044054db","Type":"ContainerDied","Data":"388ebb6836f094dd21f939ea0639060825c3402c94810c773de0878c401229f9"} Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.051513 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388ebb6836f094dd21f939ea0639060825c3402c94810c773de0878c401229f9" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.051747 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hk8gg" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.162440 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 04:08:17 crc kubenswrapper[4898]: E0120 04:08:17.162809 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf119bb-2bd0-418f-9b31-df14044054db" containerName="nova-cell0-conductor-db-sync" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.162826 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf119bb-2bd0-418f-9b31-df14044054db" containerName="nova-cell0-conductor-db-sync" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.163022 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bf119bb-2bd0-418f-9b31-df14044054db" containerName="nova-cell0-conductor-db-sync" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.163611 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.165294 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vhf8s" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.165673 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.173103 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.320598 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d04727f-3596-48ce-a3d5-d4d0deb8fe89-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7d04727f-3596-48ce-a3d5-d4d0deb8fe89\") " pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.320872 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbxh9\" (UniqueName: \"kubernetes.io/projected/7d04727f-3596-48ce-a3d5-d4d0deb8fe89-kube-api-access-rbxh9\") pod \"nova-cell0-conductor-0\" (UID: \"7d04727f-3596-48ce-a3d5-d4d0deb8fe89\") " pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.321044 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d04727f-3596-48ce-a3d5-d4d0deb8fe89-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7d04727f-3596-48ce-a3d5-d4d0deb8fe89\") " pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.422689 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d04727f-3596-48ce-a3d5-d4d0deb8fe89-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7d04727f-3596-48ce-a3d5-d4d0deb8fe89\") " pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.422813 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbxh9\" (UniqueName: \"kubernetes.io/projected/7d04727f-3596-48ce-a3d5-d4d0deb8fe89-kube-api-access-rbxh9\") pod \"nova-cell0-conductor-0\" (UID: \"7d04727f-3596-48ce-a3d5-d4d0deb8fe89\") " pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.422880 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d04727f-3596-48ce-a3d5-d4d0deb8fe89-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7d04727f-3596-48ce-a3d5-d4d0deb8fe89\") " pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.427257 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d04727f-3596-48ce-a3d5-d4d0deb8fe89-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7d04727f-3596-48ce-a3d5-d4d0deb8fe89\") " pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.427733 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d04727f-3596-48ce-a3d5-d4d0deb8fe89-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"7d04727f-3596-48ce-a3d5-d4d0deb8fe89\") " pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.451261 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbxh9\" (UniqueName: \"kubernetes.io/projected/7d04727f-3596-48ce-a3d5-d4d0deb8fe89-kube-api-access-rbxh9\") pod \"nova-cell0-conductor-0\" (UID: \"7d04727f-3596-48ce-a3d5-d4d0deb8fe89\") " pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.479872 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.952823 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 04:08:17 crc kubenswrapper[4898]: W0120 04:08:17.955555 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d04727f_3596_48ce_a3d5_d4d0deb8fe89.slice/crio-bc11a57b487d5bcafe6fb28acfca0f1d5d5421d2202ab8a206a7ca22d8584fac WatchSource:0}: Error finding container bc11a57b487d5bcafe6fb28acfca0f1d5d5421d2202ab8a206a7ca22d8584fac: Status 404 returned error can't find the container with id bc11a57b487d5bcafe6fb28acfca0f1d5d5421d2202ab8a206a7ca22d8584fac Jan 20 04:08:17 crc kubenswrapper[4898]: I0120 04:08:17.985334 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:18 crc kubenswrapper[4898]: I0120 04:08:18.066236 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7d04727f-3596-48ce-a3d5-d4d0deb8fe89","Type":"ContainerStarted","Data":"bc11a57b487d5bcafe6fb28acfca0f1d5d5421d2202ab8a206a7ca22d8584fac"} Jan 20 04:08:18 crc kubenswrapper[4898]: I0120 04:08:18.066341 4898 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 04:08:18 crc kubenswrapper[4898]: I0120 04:08:18.104757 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 20 04:08:19 crc kubenswrapper[4898]: I0120 04:08:19.080817 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7d04727f-3596-48ce-a3d5-d4d0deb8fe89","Type":"ContainerStarted","Data":"19f57c27cac40d2455b47f398757cdf77247bed1e7299bab140990425581684c"} Jan 20 04:08:19 crc kubenswrapper[4898]: I0120 04:08:19.109036 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.109013973 podStartE2EDuration="2.109013973s" podCreationTimestamp="2026-01-20 04:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:08:19.099336107 +0000 UTC m=+1145.699123976" watchObservedRunningTime="2026-01-20 04:08:19.109013973 +0000 UTC m=+1145.708801842" Jan 20 04:08:20 crc kubenswrapper[4898]: I0120 04:08:20.114334 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:27 crc kubenswrapper[4898]: I0120 04:08:27.523610 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 20 04:08:27 crc kubenswrapper[4898]: I0120 04:08:27.988877 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4bpb9"] Jan 20 
04:08:27 crc kubenswrapper[4898]: I0120 04:08:27.990136 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:27 crc kubenswrapper[4898]: I0120 04:08:27.997217 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 20 04:08:27 crc kubenswrapper[4898]: I0120 04:08:27.997648 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.002628 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4bpb9"] Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.170226 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkbw7\" (UniqueName: \"kubernetes.io/projected/853037a4-153d-47a7-bc24-69e16c937e41-kube-api-access-jkbw7\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.170283 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-scripts\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.170307 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.170370 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-config-data\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.174712 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.183282 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.193245 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.194515 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.196021 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.213122 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.229460 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.250159 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.273588 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad0af08d-c895-468d-9be8-2d484849537a-logs\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.273660 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbw7\" (UniqueName: \"kubernetes.io/projected/853037a4-153d-47a7-bc24-69e16c937e41-kube-api-access-jkbw7\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.273704 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-scripts\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.273731 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.273774 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.273808 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-config-data\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.273915 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-config-data\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.273940 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z84n2\" (UniqueName: \"kubernetes.io/projected/ad0af08d-c895-468d-9be8-2d484849537a-kube-api-access-z84n2\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " 
pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.296378 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-scripts\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.328224 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.359564 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.361598 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.364736 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-config-data\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.374067 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkbw7\" (UniqueName: \"kubernetes.io/projected/853037a4-153d-47a7-bc24-69e16c937e41-kube-api-access-jkbw7\") pod \"nova-cell0-cell-mapping-4bpb9\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") " pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.375019 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.375047 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.375126 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-config-data\") pod \"nova-scheduler-0\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.375159 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-config-data\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.375177 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z84n2\" (UniqueName: 
\"kubernetes.io/projected/ad0af08d-c895-468d-9be8-2d484849537a-kube-api-access-z84n2\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.375201 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad0af08d-c895-468d-9be8-2d484849537a-logs\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.375225 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j86dd\" (UniqueName: \"kubernetes.io/projected/2b5b8b70-7e54-4583-9541-1d7698db187a-kube-api-access-j86dd\") pod \"nova-scheduler-0\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.378378 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.379681 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad0af08d-c895-468d-9be8-2d484849537a-logs\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.392240 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-config-data\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.396463 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.397326 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.451662 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z84n2\" (UniqueName: \"kubernetes.io/projected/ad0af08d-c895-468d-9be8-2d484849537a-kube-api-access-z84n2\") pod \"nova-api-0\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.480399 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-config-data\") pod \"nova-scheduler-0\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.480488 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.480522 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-75ddw\" (UniqueName: \"kubernetes.io/projected/a02121b8-aa89-467b-bfbc-8b04e2f198a0-kube-api-access-75ddw\") pod \"nova-cell1-novncproxy-0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.480555 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j86dd\" (UniqueName: \"kubernetes.io/projected/2b5b8b70-7e54-4583-9541-1d7698db187a-kube-api-access-j86dd\") pod \"nova-scheduler-0\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.480600 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.480628 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.498336 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-config-data\") pod \"nova-scheduler-0\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.498950 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.531520 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.555384 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j86dd\" (UniqueName: \"kubernetes.io/projected/2b5b8b70-7e54-4583-9541-1d7698db187a-kube-api-access-j86dd\") pod \"nova-scheduler-0\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.581116 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.582614 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.582753 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.582797 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75ddw\" (UniqueName: \"kubernetes.io/projected/a02121b8-aa89-467b-bfbc-8b04e2f198a0-kube-api-access-75ddw\") pod \"nova-cell1-novncproxy-0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.582954 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.588597 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.591890 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.598472 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.628348 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75ddw\" (UniqueName: \"kubernetes.io/projected/a02121b8-aa89-467b-bfbc-8b04e2f198a0-kube-api-access-75ddw\") pod \"nova-cell1-novncproxy-0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.640774 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.653608 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4bpb9" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.687461 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.687583 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/514d1055-16b6-492a-93b7-55a9ea9153d8-logs\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.687632 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njl82\" (UniqueName: \"kubernetes.io/projected/514d1055-16b6-492a-93b7-55a9ea9153d8-kube-api-access-njl82\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.687694 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-config-data\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.693241 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsktj"] Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.695328 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.714895 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsktj"] Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.789498 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.789599 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-config\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.789653 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.789679 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.789711 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/514d1055-16b6-492a-93b7-55a9ea9153d8-logs\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.789817 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njl82\" (UniqueName: \"kubernetes.io/projected/514d1055-16b6-492a-93b7-55a9ea9153d8-kube-api-access-njl82\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.790172 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-config-data\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.790206 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.790269 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.790290 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxq8x\" (UniqueName: \"kubernetes.io/projected/77b4083e-c020-4b2b-8cac-cfb81dd3718c-kube-api-access-nxq8x\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.790171 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/514d1055-16b6-492a-93b7-55a9ea9153d8-logs\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.793945 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.794373 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-config-data\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.805328 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.807663 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njl82\" (UniqueName: \"kubernetes.io/projected/514d1055-16b6-492a-93b7-55a9ea9153d8-kube-api-access-njl82\") pod \"nova-metadata-0\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") " pod="openstack/nova-metadata-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.857615 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.892568 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.892615 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxq8x\" (UniqueName: \"kubernetes.io/projected/77b4083e-c020-4b2b-8cac-cfb81dd3718c-kube-api-access-nxq8x\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.892681 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.893108 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-config\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.893151 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.893181 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.893623 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.893751 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.894050 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 
04:08:28.894198 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-config\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.894449 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.909807 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxq8x\" (UniqueName: \"kubernetes.io/projected/77b4083e-c020-4b2b-8cac-cfb81dd3718c-kube-api-access-nxq8x\") pod \"dnsmasq-dns-bccf8f775-bsktj\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:28 crc kubenswrapper[4898]: I0120 04:08:28.915590 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.015739 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.315613 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ppbg2"] Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.317183 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.320028 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.320263 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.346086 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ppbg2"] Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.410814 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.410912 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-scripts\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.410935 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tvzv\" (UniqueName: \"kubernetes.io/projected/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-kube-api-access-5tvzv\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " 
pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.411085 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-config-data\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.512615 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.512704 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-scripts\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.512728 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tvzv\" (UniqueName: \"kubernetes.io/projected/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-kube-api-access-5tvzv\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.512769 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-config-data\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.523589 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.524198 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-config-data\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.531997 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-scripts\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.538142 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tvzv\" (UniqueName: \"kubernetes.io/projected/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-kube-api-access-5tvzv\") pod \"nova-cell1-conductor-db-sync-ppbg2\" (UID: 
\"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") " pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.600267 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4bpb9"] Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.655023 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.664625 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.692506 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.762484 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:08:29 crc kubenswrapper[4898]: I0120 04:08:29.770419 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsktj"] Jan 20 04:08:29 crc kubenswrapper[4898]: W0120 04:08:29.847274 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5b8b70_7e54_4583_9541_1d7698db187a.slice/crio-011624a155efa124703870dcf91c5ef0152cad48d0d901983827a858369dbaf3 WatchSource:0}: Error finding container 011624a155efa124703870dcf91c5ef0152cad48d0d901983827a858369dbaf3: Status 404 returned error can't find the container with id 011624a155efa124703870dcf91c5ef0152cad48d0d901983827a858369dbaf3 Jan 20 04:08:29 crc kubenswrapper[4898]: W0120 04:08:29.849161 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad0af08d_c895_468d_9be8_2d484849537a.slice/crio-736a1355a4b9452ee47ea55fe2e847084fddc4b6b8ecc8cbc63de94db84e87e4 WatchSource:0}: Error finding container 736a1355a4b9452ee47ea55fe2e847084fddc4b6b8ecc8cbc63de94db84e87e4: Status 404 returned error can't find the container with id 736a1355a4b9452ee47ea55fe2e847084fddc4b6b8ecc8cbc63de94db84e87e4 Jan 20 04:08:29 crc kubenswrapper[4898]: W0120 04:08:29.853881 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77b4083e_c020_4b2b_8cac_cfb81dd3718c.slice/crio-5ca30a41d2d4207df3b05d6cc107c27a79f06028afb7f2b86c91fe5d5865eebe WatchSource:0}: Error finding container 5ca30a41d2d4207df3b05d6cc107c27a79f06028afb7f2b86c91fe5d5865eebe: Status 404 returned error can't find the container with id 5ca30a41d2d4207df3b05d6cc107c27a79f06028afb7f2b86c91fe5d5865eebe Jan 20 04:08:29 crc kubenswrapper[4898]: W0120 04:08:29.860510 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod514d1055_16b6_492a_93b7_55a9ea9153d8.slice/crio-fa6eab647f027a111f2fb248504ffa612b2ba83c046c3db310384bf689623a15 WatchSource:0}: Error finding container fa6eab647f027a111f2fb248504ffa612b2ba83c046c3db310384bf689623a15: Status 404 returned error can't find the container with id fa6eab647f027a111f2fb248504ffa612b2ba83c046c3db310384bf689623a15 Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.087893 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 04:08:30 crc kubenswrapper[4898]: W0120 04:08:30.089712 4898 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda02121b8_aa89_467b_bfbc_8b04e2f198a0.slice/crio-8826433209bcf9ddf9a9946db6f3bad85d1df8606a56bb0a550329d6e015ffeb WatchSource:0}: Error finding container 8826433209bcf9ddf9a9946db6f3bad85d1df8606a56bb0a550329d6e015ffeb: Status 404 returned error can't find the container with id 8826433209bcf9ddf9a9946db6f3bad85d1df8606a56bb0a550329d6e015ffeb Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.289046 4898 generic.go:334] "Generic (PLEG): container finished" podID="77b4083e-c020-4b2b-8cac-cfb81dd3718c" containerID="6131c1a75e24e478a2b4b04b41d3a29fe082005610602f731eb726ffd6cd4bb9" exitCode=0 Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.289273 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" event={"ID":"77b4083e-c020-4b2b-8cac-cfb81dd3718c","Type":"ContainerDied","Data":"6131c1a75e24e478a2b4b04b41d3a29fe082005610602f731eb726ffd6cd4bb9"} Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.289353 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" event={"ID":"77b4083e-c020-4b2b-8cac-cfb81dd3718c","Type":"ContainerStarted","Data":"5ca30a41d2d4207df3b05d6cc107c27a79f06028afb7f2b86c91fe5d5865eebe"} Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.291356 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad0af08d-c895-468d-9be8-2d484849537a","Type":"ContainerStarted","Data":"736a1355a4b9452ee47ea55fe2e847084fddc4b6b8ecc8cbc63de94db84e87e4"} Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.297343 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b5b8b70-7e54-4583-9541-1d7698db187a","Type":"ContainerStarted","Data":"011624a155efa124703870dcf91c5ef0152cad48d0d901983827a858369dbaf3"} Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.301201 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"514d1055-16b6-492a-93b7-55a9ea9153d8","Type":"ContainerStarted","Data":"fa6eab647f027a111f2fb248504ffa612b2ba83c046c3db310384bf689623a15"} Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.320546 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4bpb9" event={"ID":"853037a4-153d-47a7-bc24-69e16c937e41","Type":"ContainerStarted","Data":"628dedf9c3f0ab41144b38c7b6eebb8800326b22fcbe8ea61284a2c9718104d8"} Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.320609 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4bpb9" event={"ID":"853037a4-153d-47a7-bc24-69e16c937e41","Type":"ContainerStarted","Data":"056092cd5da41894b85fd817bf6b81f0c05df64b586f29bf85d7c8343e12d107"} Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.323239 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a02121b8-aa89-467b-bfbc-8b04e2f198a0","Type":"ContainerStarted","Data":"8826433209bcf9ddf9a9946db6f3bad85d1df8606a56bb0a550329d6e015ffeb"} Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.347716 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ppbg2"] Jan 20 04:08:30 crc kubenswrapper[4898]: I0120 04:08:30.356557 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4bpb9" podStartSLOduration=3.35654084 podStartE2EDuration="3.35654084s" 
podCreationTimestamp="2026-01-20 04:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:08:30.342488994 +0000 UTC m=+1156.942276893" watchObservedRunningTime="2026-01-20 04:08:30.35654084 +0000 UTC m=+1156.956328709" Jan 20 04:08:31 crc kubenswrapper[4898]: I0120 04:08:31.333786 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ppbg2" event={"ID":"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb","Type":"ContainerStarted","Data":"071a66f52ab08b214495046f98666855f818e4a699c1d6b8ee7793840d70f9ff"} Jan 20 04:08:31 crc kubenswrapper[4898]: I0120 04:08:31.334383 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ppbg2" event={"ID":"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb","Type":"ContainerStarted","Data":"6e2f5e2b849de3829baa5e150750856e587622d07fbdf1007361cadedc27b44d"} Jan 20 04:08:31 crc kubenswrapper[4898]: I0120 04:08:31.337085 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" event={"ID":"77b4083e-c020-4b2b-8cac-cfb81dd3718c","Type":"ContainerStarted","Data":"244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070"} Jan 20 04:08:31 crc kubenswrapper[4898]: I0120 04:08:31.337378 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:08:31 crc kubenswrapper[4898]: I0120 04:08:31.356548 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-ppbg2" podStartSLOduration=2.3565262000000002 podStartE2EDuration="2.3565262s" podCreationTimestamp="2026-01-20 04:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:08:31.347880005 +0000 UTC m=+1157.947667864" watchObservedRunningTime="2026-01-20 04:08:31.3565262 +0000 UTC m=+1157.956314059" Jan 20 04:08:31 crc kubenswrapper[4898]: I0120 04:08:31.379164 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" podStartSLOduration=3.379143898 podStartE2EDuration="3.379143898s" podCreationTimestamp="2026-01-20 04:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:08:31.373351854 +0000 UTC m=+1157.973139723" watchObservedRunningTime="2026-01-20 04:08:31.379143898 +0000 UTC m=+1157.978931757" Jan 20 04:08:31 crc kubenswrapper[4898]: I0120 04:08:31.580397 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 04:08:31 crc kubenswrapper[4898]: I0120 04:08:31.591355 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.362899 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b5b8b70-7e54-4583-9541-1d7698db187a","Type":"ContainerStarted","Data":"5bd92d31d383a7d2cca7f3fe344073507a8e328f64d5c798bf82b4b06b273842"} Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.367314 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"514d1055-16b6-492a-93b7-55a9ea9153d8","Type":"ContainerStarted","Data":"34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3"} Jan 20 04:08:33 crc 
kubenswrapper[4898]: I0120 04:08:33.367345 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"514d1055-16b6-492a-93b7-55a9ea9153d8","Type":"ContainerStarted","Data":"ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6"} Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.367636 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="514d1055-16b6-492a-93b7-55a9ea9153d8" containerName="nova-metadata-log" containerID="cri-o://ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6" gracePeriod=30 Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.367876 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="514d1055-16b6-492a-93b7-55a9ea9153d8" containerName="nova-metadata-metadata" containerID="cri-o://34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3" gracePeriod=30 Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.369709 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a02121b8-aa89-467b-bfbc-8b04e2f198a0","Type":"ContainerStarted","Data":"5a2ed03b4d2609760f3955725dac6c87a66ccc6574183d749f6489f5cd5dbe20"} Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.369849 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a02121b8-aa89-467b-bfbc-8b04e2f198a0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5a2ed03b4d2609760f3955725dac6c87a66ccc6574183d749f6489f5cd5dbe20" gracePeriod=30 Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.376290 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad0af08d-c895-468d-9be8-2d484849537a","Type":"ContainerStarted","Data":"bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d"} Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.376342 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad0af08d-c895-468d-9be8-2d484849537a","Type":"ContainerStarted","Data":"3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82"} Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.386076 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.893777183 podStartE2EDuration="5.386059317s" podCreationTimestamp="2026-01-20 04:08:28 +0000 UTC" firstStartedPulling="2026-01-20 04:08:29.850683774 +0000 UTC m=+1156.450471633" lastFinishedPulling="2026-01-20 04:08:32.342965908 +0000 UTC m=+1158.942753767" observedRunningTime="2026-01-20 04:08:33.385045824 +0000 UTC m=+1159.984833693" watchObservedRunningTime="2026-01-20 04:08:33.386059317 +0000 UTC m=+1159.985847186" Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.400943 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.426463 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.966310544 podStartE2EDuration="5.426444718s" podCreationTimestamp="2026-01-20 04:08:28 +0000 UTC" firstStartedPulling="2026-01-20 04:08:29.882815184 +0000 UTC m=+1156.482603043" lastFinishedPulling="2026-01-20 04:08:32.342949348 +0000 UTC m=+1158.942737217" observedRunningTime="2026-01-20 04:08:33.420314504 +0000 
UTC m=+1160.020102373" watchObservedRunningTime="2026-01-20 04:08:33.426444718 +0000 UTC m=+1160.026232577" Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.462900 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.935540427 podStartE2EDuration="5.462871424s" podCreationTimestamp="2026-01-20 04:08:28 +0000 UTC" firstStartedPulling="2026-01-20 04:08:29.8527584 +0000 UTC m=+1156.452546259" lastFinishedPulling="2026-01-20 04:08:32.380089397 +0000 UTC m=+1158.979877256" observedRunningTime="2026-01-20 04:08:33.441081353 +0000 UTC m=+1160.040869212" watchObservedRunningTime="2026-01-20 04:08:33.462871424 +0000 UTC m=+1160.062659283" Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.463235 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.213680486 podStartE2EDuration="5.463229276s" podCreationTimestamp="2026-01-20 04:08:28 +0000 UTC" firstStartedPulling="2026-01-20 04:08:30.093379787 +0000 UTC m=+1156.693167646" lastFinishedPulling="2026-01-20 04:08:32.342928577 +0000 UTC m=+1158.942716436" observedRunningTime="2026-01-20 04:08:33.456537824 +0000 UTC m=+1160.056325723" watchObservedRunningTime="2026-01-20 04:08:33.463229276 +0000 UTC m=+1160.063017135" Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.808524 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.858266 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.916167 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.916220 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 04:08:33 crc kubenswrapper[4898]: I0120 04:08:33.931460 4898 util.go:48] "No ready sandbox for pod can be found. 
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.026959 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njl82\" (UniqueName: \"kubernetes.io/projected/514d1055-16b6-492a-93b7-55a9ea9153d8-kube-api-access-njl82\") pod \"514d1055-16b6-492a-93b7-55a9ea9153d8\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") "
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.027151 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/514d1055-16b6-492a-93b7-55a9ea9153d8-logs\") pod \"514d1055-16b6-492a-93b7-55a9ea9153d8\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") "
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.027326 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-combined-ca-bundle\") pod \"514d1055-16b6-492a-93b7-55a9ea9153d8\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") "
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.027386 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-config-data\") pod \"514d1055-16b6-492a-93b7-55a9ea9153d8\" (UID: \"514d1055-16b6-492a-93b7-55a9ea9153d8\") "
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.027580 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/514d1055-16b6-492a-93b7-55a9ea9153d8-logs" (OuterVolumeSpecName: "logs") pod "514d1055-16b6-492a-93b7-55a9ea9153d8" (UID: "514d1055-16b6-492a-93b7-55a9ea9153d8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.028336 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/514d1055-16b6-492a-93b7-55a9ea9153d8-logs\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.032528 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514d1055-16b6-492a-93b7-55a9ea9153d8-kube-api-access-njl82" (OuterVolumeSpecName: "kube-api-access-njl82") pod "514d1055-16b6-492a-93b7-55a9ea9153d8" (UID: "514d1055-16b6-492a-93b7-55a9ea9153d8"). InnerVolumeSpecName "kube-api-access-njl82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.055100 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-config-data" (OuterVolumeSpecName: "config-data") pod "514d1055-16b6-492a-93b7-55a9ea9153d8" (UID: "514d1055-16b6-492a-93b7-55a9ea9153d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.063263 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "514d1055-16b6-492a-93b7-55a9ea9153d8" (UID: "514d1055-16b6-492a-93b7-55a9ea9153d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.130312 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.130555 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/514d1055-16b6-492a-93b7-55a9ea9153d8-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.130649 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njl82\" (UniqueName: \"kubernetes.io/projected/514d1055-16b6-492a-93b7-55a9ea9153d8-kube-api-access-njl82\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.388385 4898 generic.go:334] "Generic (PLEG): container finished" podID="514d1055-16b6-492a-93b7-55a9ea9153d8" containerID="34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3" exitCode=0
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.388443 4898 generic.go:334] "Generic (PLEG): container finished" podID="514d1055-16b6-492a-93b7-55a9ea9153d8" containerID="ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6" exitCode=143
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.388472 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"514d1055-16b6-492a-93b7-55a9ea9153d8","Type":"ContainerDied","Data":"34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3"}
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.388514 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"514d1055-16b6-492a-93b7-55a9ea9153d8","Type":"ContainerDied","Data":"ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6"}
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.388530 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"514d1055-16b6-492a-93b7-55a9ea9153d8","Type":"ContainerDied","Data":"fa6eab647f027a111f2fb248504ffa612b2ba83c046c3db310384bf689623a15"}
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.388460 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
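The two exit codes above tell the termination story for nova-metadata-0: the metadata container exited 0 (clean shutdown within the grace period), while the log sidecar exited 143, the conventional 128+signal encoding for a process terminated by SIGTERM (15). A quick check of that arithmetic:

```python
import signal

# 143 == 128 + SIGTERM: the process was killed by signal 15 rather than
# exiting on its own; matches exitCode=143 in the ContainerDied event above.
assert 128 + signal.SIGTERM == 143
# By contrast, kube-state-metrics later finishes with exitCode=2, a plain
# application exit status, not a signal encoding.
```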
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.388553 4898 scope.go:117] "RemoveContainer" containerID="34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.433545 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.450645 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.453642 4898 scope.go:117] "RemoveContainer" containerID="ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.471569 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 20 04:08:34 crc kubenswrapper[4898]: E0120 04:08:34.472159 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514d1055-16b6-492a-93b7-55a9ea9153d8" containerName="nova-metadata-log"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.472195 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="514d1055-16b6-492a-93b7-55a9ea9153d8" containerName="nova-metadata-log"
Jan 20 04:08:34 crc kubenswrapper[4898]: E0120 04:08:34.472218 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514d1055-16b6-492a-93b7-55a9ea9153d8" containerName="nova-metadata-metadata"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.472224 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="514d1055-16b6-492a-93b7-55a9ea9153d8" containerName="nova-metadata-metadata"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.472958 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="514d1055-16b6-492a-93b7-55a9ea9153d8" containerName="nova-metadata-log"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.472993 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="514d1055-16b6-492a-93b7-55a9ea9153d8" containerName="nova-metadata-metadata"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.475103 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.479176 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.479355 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.484818 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.527477 4898 scope.go:117] "RemoveContainer" containerID="34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3"
Jan 20 04:08:34 crc kubenswrapper[4898]: E0120 04:08:34.528368 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3\": container with ID starting with 34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3 not found: ID does not exist" containerID="34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.528473 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3"} err="failed to get container status \"34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3\": rpc error: code = NotFound desc = could not find container \"34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3\": container with ID starting with 34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3 not found: ID does not exist"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.528564 4898 scope.go:117] "RemoveContainer" containerID="ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6"
Jan 20 04:08:34 crc kubenswrapper[4898]: E0120 04:08:34.528937 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6\": container with ID starting with ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6 not found: ID does not exist" containerID="ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.528977 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6"} err="failed to get container status \"ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6\": rpc error: code = NotFound desc = could not find container \"ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6\": container with ID starting with ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6 not found: ID does not exist"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.529004 4898 scope.go:117] "RemoveContainer" containerID="34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.529389 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3"} err="failed to get container status \"34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3\": rpc error: code = NotFound desc = could not find container \"34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3\": container with ID starting with 34860639eedb11389445de0c4736df4389b82e8b8bfd107feaabc27ba474a2d3 not found: ID does not exist"
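The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" lines are noisy but benign: the kubelet asks the runtime to remove container IDs that have already been purged, and a NotFound answer means the desired end state (container gone) already holds. Treating NotFound as success is the standard way to keep deletion idempotent; a hedged sketch of that pattern (the runtime client here is hypothetical, not the kubelet's actual CRI interface):

```python
class NotFound(Exception):
    """Stand-in for a CRI rpc status of code = NotFound."""

def remove_container(runtime, container_id: str) -> None:
    # Idempotent delete: a missing container is the goal state, not a failure,
    # which is why the retries logged above are harmless.
    try:
        runtime.remove(container_id)
    except NotFound:
        pass  # already gone
```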
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.529460 4898 scope.go:117] "RemoveContainer" containerID="ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.530164 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6"} err="failed to get container status \"ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6\": rpc error: code = NotFound desc = could not find container \"ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6\": container with ID starting with ac0b7a747e98bfc3328f6eb6aa1e47525f0a72a2e13d546c1e49ffda5a6fa2c6 not found: ID does not exist"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.537859 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcwmn\" (UniqueName: \"kubernetes.io/projected/0c4db390-4c17-49bc-9932-30f42c018772-kube-api-access-vcwmn\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.538007 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-config-data\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.538046 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.538091 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.538163 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c4db390-4c17-49bc-9932-30f42c018772-logs\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.640395 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.640524 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c4db390-4c17-49bc-9932-30f42c018772-logs\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.640586 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcwmn\" (UniqueName: \"kubernetes.io/projected/0c4db390-4c17-49bc-9932-30f42c018772-kube-api-access-vcwmn\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.640683 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-config-data\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.640721 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.642005 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c4db390-4c17-49bc-9932-30f42c018772-logs\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.647264 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.649988 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-config-data\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.650004 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.674094 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcwmn\" (UniqueName: \"kubernetes.io/projected/0c4db390-4c17-49bc-9932-30f42c018772-kube-api-access-vcwmn\") pod \"nova-metadata-0\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " pod="openstack/nova-metadata-0"
Jan 20 04:08:34 crc kubenswrapper[4898]: I0120 04:08:34.795943 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 20 04:08:35 crc kubenswrapper[4898]: W0120 04:08:35.285629 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c4db390_4c17_49bc_9932_30f42c018772.slice/crio-2296cf05e2c5c0a57ec9b9333dbfdfd84c5245c2fc45a5e4623c8195ae8d85de WatchSource:0}: Error finding container 2296cf05e2c5c0a57ec9b9333dbfdfd84c5245c2fc45a5e4623c8195ae8d85de: Status 404 returned error can't find the container with id 2296cf05e2c5c0a57ec9b9333dbfdfd84c5245c2fc45a5e4623c8195ae8d85de
Jan 20 04:08:35 crc kubenswrapper[4898]: I0120 04:08:35.289503 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 20 04:08:35 crc kubenswrapper[4898]: I0120 04:08:35.399936 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c4db390-4c17-49bc-9932-30f42c018772","Type":"ContainerStarted","Data":"2296cf05e2c5c0a57ec9b9333dbfdfd84c5245c2fc45a5e4623c8195ae8d85de"}
Jan 20 04:08:35 crc kubenswrapper[4898]: I0120 04:08:35.733553 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="514d1055-16b6-492a-93b7-55a9ea9153d8" path="/var/lib/kubelet/pods/514d1055-16b6-492a-93b7-55a9ea9153d8/volumes"
Jan 20 04:08:36 crc kubenswrapper[4898]: I0120 04:08:36.413649 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c4db390-4c17-49bc-9932-30f42c018772","Type":"ContainerStarted","Data":"cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f"}
Jan 20 04:08:36 crc kubenswrapper[4898]: I0120 04:08:36.413978 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c4db390-4c17-49bc-9932-30f42c018772","Type":"ContainerStarted","Data":"75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36"}
Jan 20 04:08:37 crc kubenswrapper[4898]: I0120 04:08:37.264521 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.264504708 podStartE2EDuration="3.264504708s" podCreationTimestamp="2026-01-20 04:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:08:36.44982687 +0000 UTC m=+1163.049614729" watchObservedRunningTime="2026-01-20 04:08:37.264504708 +0000 UTC m=+1163.864292567"
Jan 20 04:08:37 crc kubenswrapper[4898]: I0120 04:08:37.265562 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 20 04:08:37 crc kubenswrapper[4898]: I0120 04:08:37.265757 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d881812d-76d9-4618-8e72-815f0d9571f5" containerName="kube-state-metrics" containerID="cri-o://62bb4269d4b8ebb29a0c3dc83c0bcb1f3179b5128fe5725e125d2532abff148c" gracePeriod=30
Jan 20 04:08:37 crc kubenswrapper[4898]: I0120 04:08:37.433998 4898 generic.go:334] "Generic (PLEG): container finished" podID="d881812d-76d9-4618-8e72-815f0d9571f5" containerID="62bb4269d4b8ebb29a0c3dc83c0bcb1f3179b5128fe5725e125d2532abff148c" exitCode=2
Jan 20 04:08:37 crc kubenswrapper[4898]: I0120 04:08:37.434258 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d881812d-76d9-4618-8e72-815f0d9571f5","Type":"ContainerDied","Data":"62bb4269d4b8ebb29a0c3dc83c0bcb1f3179b5128fe5725e125d2532abff148c"}
Jan 20 04:08:37 crc kubenswrapper[4898]: I0120 04:08:37.761888 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
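The reconciler_common.go lines above trace one complete volume life cycle across the pod replacement: UnmountVolume/TearDown and "Volume detached" for the old nova-metadata-0 UID (514d1055-...), then VerifyControllerAttachedVolume, "MountVolume started", and "MountVolume.SetUp succeeded" for the replacement UID (0c4db390-...). The volume manager is a desired-state-versus-actual-state reconcile loop; a compressed sketch of that shape (names are illustrative, not the kubelet's API):

```python
def reconcile(desired: set[str], actual: set[str], mount, unmount) -> set[str]:
    """One pass of a desired-state reconciler, like the volume manager above."""
    for vol in actual - desired:   # old pod's volumes: TearDown, then "detached"
        unmount(vol)
    for vol in desired - actual:   # new pod's volumes: verify attach, then SetUp
        mount(vol)
    return desired
```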
Jan 20 04:08:37 crc kubenswrapper[4898]: I0120 04:08:37.803155 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg8ft\" (UniqueName: \"kubernetes.io/projected/d881812d-76d9-4618-8e72-815f0d9571f5-kube-api-access-jg8ft\") pod \"d881812d-76d9-4618-8e72-815f0d9571f5\" (UID: \"d881812d-76d9-4618-8e72-815f0d9571f5\") "
Jan 20 04:08:37 crc kubenswrapper[4898]: I0120 04:08:37.811517 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d881812d-76d9-4618-8e72-815f0d9571f5-kube-api-access-jg8ft" (OuterVolumeSpecName: "kube-api-access-jg8ft") pod "d881812d-76d9-4618-8e72-815f0d9571f5" (UID: "d881812d-76d9-4618-8e72-815f0d9571f5"). InnerVolumeSpecName "kube-api-access-jg8ft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:08:37 crc kubenswrapper[4898]: I0120 04:08:37.905073 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg8ft\" (UniqueName: \"kubernetes.io/projected/d881812d-76d9-4618-8e72-815f0d9571f5-kube-api-access-jg8ft\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.447739 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.447757 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d881812d-76d9-4618-8e72-815f0d9571f5","Type":"ContainerDied","Data":"84f98aab2fb267bf68fb413933524c8c92de45281d548f3f4039d42e5514c243"}
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.447866 4898 scope.go:117] "RemoveContainer" containerID="62bb4269d4b8ebb29a0c3dc83c0bcb1f3179b5128fe5725e125d2532abff148c"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.449085 4898 generic.go:334] "Generic (PLEG): container finished" podID="853037a4-153d-47a7-bc24-69e16c937e41" containerID="628dedf9c3f0ab41144b38c7b6eebb8800326b22fcbe8ea61284a2c9718104d8" exitCode=0
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.449120 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4bpb9" event={"ID":"853037a4-153d-47a7-bc24-69e16c937e41","Type":"ContainerDied","Data":"628dedf9c3f0ab41144b38c7b6eebb8800326b22fcbe8ea61284a2c9718104d8"}
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.452278 4898 generic.go:334] "Generic (PLEG): container finished" podID="39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb" containerID="071a66f52ab08b214495046f98666855f818e4a699c1d6b8ee7793840d70f9ff" exitCode=0
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.452346 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ppbg2" event={"ID":"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb","Type":"ContainerDied","Data":"071a66f52ab08b214495046f98666855f818e4a699c1d6b8ee7793840d70f9ff"}
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.507665 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.532909 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.532952 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.547283 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.565346 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 20 04:08:38 crc kubenswrapper[4898]: E0120 04:08:38.566176 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d881812d-76d9-4618-8e72-815f0d9571f5" containerName="kube-state-metrics"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.566201 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="d881812d-76d9-4618-8e72-815f0d9571f5" containerName="kube-state-metrics"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.566479 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="d881812d-76d9-4618-8e72-815f0d9571f5" containerName="kube-state-metrics"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.567341 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.569344 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.569719 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.578142 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.655881 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/035f57fe-ac66-4b46-93d4-26575736e9bb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.655981 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr7w9\" (UniqueName: \"kubernetes.io/projected/035f57fe-ac66-4b46-93d4-26575736e9bb-kube-api-access-gr7w9\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.656029 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035f57fe-ac66-4b46-93d4-26575736e9bb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.656098 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/035f57fe-ac66-4b46-93d4-26575736e9bb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: E0120 04:08:38.665659 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd881812d_76d9_4618_8e72_815f0d9571f5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd881812d_76d9_4618_8e72_815f0d9571f5.slice/crio-84f98aab2fb267bf68fb413933524c8c92de45281d548f3f4039d42e5514c243\": RecentStats: unable to find data in memory cache]"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.757857 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr7w9\" (UniqueName: \"kubernetes.io/projected/035f57fe-ac66-4b46-93d4-26575736e9bb-kube-api-access-gr7w9\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.758289 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035f57fe-ac66-4b46-93d4-26575736e9bb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.758561 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/035f57fe-ac66-4b46-93d4-26575736e9bb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.758907 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/035f57fe-ac66-4b46-93d4-26575736e9bb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.770924 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/035f57fe-ac66-4b46-93d4-26575736e9bb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.771129 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/035f57fe-ac66-4b46-93d4-26575736e9bb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.771402 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035f57fe-ac66-4b46-93d4-26575736e9bb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.777927 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr7w9\" (UniqueName: \"kubernetes.io/projected/035f57fe-ac66-4b46-93d4-26575736e9bb-kube-api-access-gr7w9\") pod \"kube-state-metrics-0\" (UID: \"035f57fe-ac66-4b46-93d4-26575736e9bb\") " pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.858678 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
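Nearly everything in this window is informational (I...); the occasional E... lines, like the cadvisor "Partial failure" just above, are transient races against cgroups whose containers have just exited. When skimming a capture like this, filtering on the klog severity byte (the I/W/E letter fused to the MMDD date) is the quickest triage; a small sketch over journalctl-style text, with the regex derived from the prefix format visible in these entries:

```python
import re

# klog prefix as it appears here: severity letter + MMDD, then hh:mm:ss.micros.
KLOG = re.compile(r"kubenswrapper\[\d+\]: ([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)")

def warnings_and_errors(lines):
    """Yield (severity, timestamp) for W/E kubelet entries, skipping the I noise."""
    for line in lines:
        m = KLOG.search(line)
        if m and m.group(1) in "WE":
            yield m.group(1), m.group(3)
```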
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.882348 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 20 04:08:38 crc kubenswrapper[4898]: I0120 04:08:38.883930 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.017717 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-bsktj"
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.020247 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.020684 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="ceilometer-central-agent" containerID="cri-o://01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8" gracePeriod=30
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.020768 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="sg-core" containerID="cri-o://1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942" gracePeriod=30
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.020834 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="proxy-httpd" containerID="cri-o://8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0" gracePeriod=30
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.020837 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="ceilometer-notification-agent" containerID="cri-o://75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56" gracePeriod=30
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.086253 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zkj8r"]
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.086521 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" podUID="109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" containerName="dnsmasq-dns" containerID="cri-o://03d8fa03e72cee79da1294ff76f42fd52a90da35a5ca934d03562cb2dc30ba42" gracePeriod=10
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.248409 4898 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" podUID="109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused"
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.388266 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.462654 4898 generic.go:334] "Generic (PLEG): container finished" podID="109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" containerID="03d8fa03e72cee79da1294ff76f42fd52a90da35a5ca934d03562cb2dc30ba42" exitCode=0
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.462732 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" event={"ID":"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e","Type":"ContainerDied","Data":"03d8fa03e72cee79da1294ff76f42fd52a90da35a5ca934d03562cb2dc30ba42"}
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.464589 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"035f57fe-ac66-4b46-93d4-26575736e9bb","Type":"ContainerStarted","Data":"c098ef4a545d6590a5fe9951457b71554de27b6b0b1bc85696c8bceb5f7cc46c"}
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.467069 4898 generic.go:334] "Generic (PLEG): container finished" podID="c9599b5a-abed-4816-ac11-d54cf903104c" containerID="8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0" exitCode=0
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.467095 4898 generic.go:334] "Generic (PLEG): container finished" podID="c9599b5a-abed-4816-ac11-d54cf903104c" containerID="1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942" exitCode=2
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.467103 4898 generic.go:334] "Generic (PLEG): container finished" podID="c9599b5a-abed-4816-ac11-d54cf903104c" containerID="01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8" exitCode=0
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.467279 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9599b5a-abed-4816-ac11-d54cf903104c","Type":"ContainerDied","Data":"8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0"}
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.467303 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9599b5a-abed-4816-ac11-d54cf903104c","Type":"ContainerDied","Data":"1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942"}
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.467313 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9599b5a-abed-4816-ac11-d54cf903104c","Type":"ContainerDied","Data":"01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8"}
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.511277 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.562406 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r"
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.614623 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ad0af08d-c895-468d-9be8-2d484849537a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.614964 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ad0af08d-c895-468d-9be8-2d484849537a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.689423 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-sb\") pod \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") "
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.689499 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-svc\") pod \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") "
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.689543 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlhqb\" (UniqueName: \"kubernetes.io/projected/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-kube-api-access-hlhqb\") pod \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") "
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.689571 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-nb\") pod \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") "
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.689618 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-config\") pod \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") "
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.689771 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-swift-storage-0\") pod \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\" (UID: \"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e\") "
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.700685 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-kube-api-access-hlhqb" (OuterVolumeSpecName: "kube-api-access-hlhqb") pod "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" (UID: "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e"). InnerVolumeSpecName "kube-api-access-hlhqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.734385 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d881812d-76d9-4618-8e72-815f0d9571f5" path="/var/lib/kubelet/pods/d881812d-76d9-4618-8e72-815f0d9571f5/volumes"
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.766808 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" (UID: "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.780806 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" (UID: "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.780837 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" (UID: "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.792723 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-config" (OuterVolumeSpecName: "config") pod "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" (UID: "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.797021 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.797476 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.800617 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.800810 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" (UID: "109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
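Two distinct probe failures show up during this teardown window: a readiness dial to 10.217.0.163:5353 is refused because dnsmasq has already exited, and nova-api's startup probes time out on http://10.217.0.185:8774/. The HTTP case has the same shape as a plain GET with a hard deadline; a rough reproduction (URL taken from the log entries above; the pod IP is only reachable from inside the cluster network, and the kubelet's actual success criteria may differ in detail):

```python
import urllib.request, urllib.error

def startup_probe(url: str, timeout_s: float = 1.0) -> bool:
    """Approximation of an HTTP probe: GET with a deadline, 2xx/3xx is success."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, TimeoutError):
        return False  # maps to the "Probe failed ... context deadline exceeded" lines

# startup_probe("http://10.217.0.185:8774/")  # pod IP from the log above
```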
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.800860 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.800986 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlhqb\" (UniqueName: \"kubernetes.io/projected/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-kube-api-access-hlhqb\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.800997 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.801006 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-config\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.903565 4898 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.925962 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4bpb9"
Jan 20 04:08:39 crc kubenswrapper[4898]: I0120 04:08:39.984678 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ppbg2"
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.008162 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkbw7\" (UniqueName: \"kubernetes.io/projected/853037a4-153d-47a7-bc24-69e16c937e41-kube-api-access-jkbw7\") pod \"853037a4-153d-47a7-bc24-69e16c937e41\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") "
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.008457 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-scripts\") pod \"853037a4-153d-47a7-bc24-69e16c937e41\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") "
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.008558 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-combined-ca-bundle\") pod \"853037a4-153d-47a7-bc24-69e16c937e41\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") "
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.008681 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-config-data\") pod \"853037a4-153d-47a7-bc24-69e16c937e41\" (UID: \"853037a4-153d-47a7-bc24-69e16c937e41\") "
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.018035 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853037a4-153d-47a7-bc24-69e16c937e41-kube-api-access-jkbw7" (OuterVolumeSpecName: "kube-api-access-jkbw7") pod "853037a4-153d-47a7-bc24-69e16c937e41" (UID: "853037a4-153d-47a7-bc24-69e16c937e41"). InnerVolumeSpecName "kube-api-access-jkbw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.019077 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-scripts" (OuterVolumeSpecName: "scripts") pod "853037a4-153d-47a7-bc24-69e16c937e41" (UID: "853037a4-153d-47a7-bc24-69e16c937e41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.049964 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "853037a4-153d-47a7-bc24-69e16c937e41" (UID: "853037a4-153d-47a7-bc24-69e16c937e41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.051901 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-config-data" (OuterVolumeSpecName: "config-data") pod "853037a4-153d-47a7-bc24-69e16c937e41" (UID: "853037a4-153d-47a7-bc24-69e16c937e41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.110561 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-config-data\") pod \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") "
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.110646 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-combined-ca-bundle\") pod \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") "
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.110800 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-scripts\") pod \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") "
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.110897 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tvzv\" (UniqueName: \"kubernetes.io/projected/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-kube-api-access-5tvzv\") pod \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\" (UID: \"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb\") "
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.111263 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.111279 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.111287 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkbw7\" (UniqueName: \"kubernetes.io/projected/853037a4-153d-47a7-bc24-69e16c937e41-kube-api-access-jkbw7\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.111296 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/853037a4-153d-47a7-bc24-69e16c937e41-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.114080 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-kube-api-access-5tvzv" (OuterVolumeSpecName: "kube-api-access-5tvzv") pod "39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb" (UID: "39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb"). InnerVolumeSpecName "kube-api-access-5tvzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.114576 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-scripts" (OuterVolumeSpecName: "scripts") pod "39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb" (UID: "39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.138712 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-config-data" (OuterVolumeSpecName: "config-data") pod "39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb" (UID: "39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.139613 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb" (UID: "39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.212823 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tvzv\" (UniqueName: \"kubernetes.io/projected/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-kube-api-access-5tvzv\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.212857 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.212869 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.212879 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.489351 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r" event={"ID":"109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e","Type":"ContainerDied","Data":"3d4ddb59b4552a7b325cc9846c938ad4b9526c49fcbf96e3f8b769f18ea3259e"}
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.489405 4898 scope.go:117] "RemoveContainer" containerID="03d8fa03e72cee79da1294ff76f42fd52a90da35a5ca934d03562cb2dc30ba42"
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.489524 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zkj8r"
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.496672 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4bpb9" event={"ID":"853037a4-153d-47a7-bc24-69e16c937e41","Type":"ContainerDied","Data":"056092cd5da41894b85fd817bf6b81f0c05df64b586f29bf85d7c8343e12d107"}
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.496711 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="056092cd5da41894b85fd817bf6b81f0c05df64b586f29bf85d7c8343e12d107"
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.496768 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4bpb9"
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.498545 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ppbg2" event={"ID":"39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb","Type":"ContainerDied","Data":"6e2f5e2b849de3829baa5e150750856e587622d07fbdf1007361cadedc27b44d"}
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.498599 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e2f5e2b849de3829baa5e150750856e587622d07fbdf1007361cadedc27b44d"
Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.498674 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ppbg2"
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ppbg2" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.507102 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"035f57fe-ac66-4b46-93d4-26575736e9bb","Type":"ContainerStarted","Data":"69c7fc29a9355ac507ae3848303ed63b0d81a2efbfda0ec11786eb77724df9c8"} Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.507288 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.541675 4898 scope.go:117] "RemoveContainer" containerID="2e3aac3935084d39b64cd2bcc9374f6964e91fffe35cc32c78b822f761e3c5b9" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.588576 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.214986516 podStartE2EDuration="2.588553993s" podCreationTimestamp="2026-01-20 04:08:38 +0000 UTC" firstStartedPulling="2026-01-20 04:08:39.408848839 +0000 UTC m=+1166.008636698" lastFinishedPulling="2026-01-20 04:08:39.782416326 +0000 UTC m=+1166.382204175" observedRunningTime="2026-01-20 04:08:40.519871633 +0000 UTC m=+1167.119659492" watchObservedRunningTime="2026-01-20 04:08:40.588553993 +0000 UTC m=+1167.188341852" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.601318 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zkj8r"] Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.610550 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zkj8r"] Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.628086 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 04:08:40 crc kubenswrapper[4898]: E0120 04:08:40.628509 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" containerName="init" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.628526 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" containerName="init" Jan 20 04:08:40 crc kubenswrapper[4898]: E0120 04:08:40.628547 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" containerName="dnsmasq-dns" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.628554 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" containerName="dnsmasq-dns" Jan 20 04:08:40 crc kubenswrapper[4898]: E0120 04:08:40.628578 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb" containerName="nova-cell1-conductor-db-sync" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.628585 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb" containerName="nova-cell1-conductor-db-sync" Jan 20 04:08:40 crc kubenswrapper[4898]: E0120 04:08:40.628599 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853037a4-153d-47a7-bc24-69e16c937e41" containerName="nova-manage" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.628604 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="853037a4-153d-47a7-bc24-69e16c937e41" containerName="nova-manage" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.628760 4898 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb" containerName="nova-cell1-conductor-db-sync" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.628789 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" containerName="dnsmasq-dns" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.628803 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="853037a4-153d-47a7-bc24-69e16c937e41" containerName="nova-manage" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.629324 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.629403 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.651727 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.716575 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.722332 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f36a422-fdc8-447c-8abb-8deabac9c903-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0f36a422-fdc8-447c-8abb-8deabac9c903\") " pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.722451 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7bq2\" (UniqueName: \"kubernetes.io/projected/0f36a422-fdc8-447c-8abb-8deabac9c903-kube-api-access-s7bq2\") pod \"nova-cell1-conductor-0\" (UID: \"0f36a422-fdc8-447c-8abb-8deabac9c903\") " pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.722500 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f36a422-fdc8-447c-8abb-8deabac9c903-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0f36a422-fdc8-447c-8abb-8deabac9c903\") " pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.729723 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.730121 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ad0af08d-c895-468d-9be8-2d484849537a" containerName="nova-api-log" containerID="cri-o://3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82" gracePeriod=30 Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.730982 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ad0af08d-c895-468d-9be8-2d484849537a" containerName="nova-api-api" containerID="cri-o://bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d" gracePeriod=30 Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.806372 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.823663 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0f36a422-fdc8-447c-8abb-8deabac9c903-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0f36a422-fdc8-447c-8abb-8deabac9c903\") " pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.823769 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7bq2\" (UniqueName: \"kubernetes.io/projected/0f36a422-fdc8-447c-8abb-8deabac9c903-kube-api-access-s7bq2\") pod \"nova-cell1-conductor-0\" (UID: \"0f36a422-fdc8-447c-8abb-8deabac9c903\") " pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.824723 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f36a422-fdc8-447c-8abb-8deabac9c903-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0f36a422-fdc8-447c-8abb-8deabac9c903\") " pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.830969 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f36a422-fdc8-447c-8abb-8deabac9c903-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0f36a422-fdc8-447c-8abb-8deabac9c903\") " pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.857623 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f36a422-fdc8-447c-8abb-8deabac9c903-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0f36a422-fdc8-447c-8abb-8deabac9c903\") " pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.873857 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7bq2\" (UniqueName: \"kubernetes.io/projected/0f36a422-fdc8-447c-8abb-8deabac9c903-kube-api-access-s7bq2\") pod \"nova-cell1-conductor-0\" (UID: \"0f36a422-fdc8-447c-8abb-8deabac9c903\") " pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:40 crc kubenswrapper[4898]: I0120 04:08:40.962824 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:41 crc kubenswrapper[4898]: I0120 04:08:41.460907 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 04:08:41 crc kubenswrapper[4898]: W0120 04:08:41.463923 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f36a422_fdc8_447c_8abb_8deabac9c903.slice/crio-c4f3c0d1c775001592b5678565b5a9d5f3942d45465008e9427422e4df85c260 WatchSource:0}: Error finding container c4f3c0d1c775001592b5678565b5a9d5f3942d45465008e9427422e4df85c260: Status 404 returned error can't find the container with id c4f3c0d1c775001592b5678565b5a9d5f3942d45465008e9427422e4df85c260 Jan 20 04:08:41 crc kubenswrapper[4898]: I0120 04:08:41.520874 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0f36a422-fdc8-447c-8abb-8deabac9c903","Type":"ContainerStarted","Data":"c4f3c0d1c775001592b5678565b5a9d5f3942d45465008e9427422e4df85c260"} Jan 20 04:08:41 crc kubenswrapper[4898]: I0120 04:08:41.524182 4898 generic.go:334] "Generic (PLEG): container finished" podID="ad0af08d-c895-468d-9be8-2d484849537a" containerID="3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82" exitCode=143 Jan 20 04:08:41 crc kubenswrapper[4898]: I0120 04:08:41.524278 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad0af08d-c895-468d-9be8-2d484849537a","Type":"ContainerDied","Data":"3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82"} Jan 20 04:08:41 crc kubenswrapper[4898]: I0120 04:08:41.527334 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2b5b8b70-7e54-4583-9541-1d7698db187a" containerName="nova-scheduler-scheduler" containerID="cri-o://5bd92d31d383a7d2cca7f3fe344073507a8e328f64d5c798bf82b4b06b273842" gracePeriod=30 Jan 20 04:08:41 crc kubenswrapper[4898]: I0120 04:08:41.528071 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0c4db390-4c17-49bc-9932-30f42c018772" containerName="nova-metadata-log" containerID="cri-o://75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36" gracePeriod=30 Jan 20 04:08:41 crc kubenswrapper[4898]: I0120 04:08:41.528236 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0c4db390-4c17-49bc-9932-30f42c018772" containerName="nova-metadata-metadata" containerID="cri-o://cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f" gracePeriod=30 Jan 20 04:08:41 crc kubenswrapper[4898]: I0120 04:08:41.746148 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e" path="/var/lib/kubelet/pods/109e3c7e-f3ff-4d73-a5ea-1f1d7555aa7e/volumes" Jan 20 04:08:41 crc kubenswrapper[4898]: I0120 04:08:41.913977 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.046942 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-run-httpd\") pod \"c9599b5a-abed-4816-ac11-d54cf903104c\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.047103 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-combined-ca-bundle\") pod \"c9599b5a-abed-4816-ac11-d54cf903104c\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.047244 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-log-httpd\") pod \"c9599b5a-abed-4816-ac11-d54cf903104c\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.047280 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-sg-core-conf-yaml\") pod \"c9599b5a-abed-4816-ac11-d54cf903104c\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.047324 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-config-data\") pod \"c9599b5a-abed-4816-ac11-d54cf903104c\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.047461 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-scripts\") pod \"c9599b5a-abed-4816-ac11-d54cf903104c\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.048013 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c9599b5a-abed-4816-ac11-d54cf903104c" (UID: "c9599b5a-abed-4816-ac11-d54cf903104c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.048226 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5czp\" (UniqueName: \"kubernetes.io/projected/c9599b5a-abed-4816-ac11-d54cf903104c-kube-api-access-z5czp\") pod \"c9599b5a-abed-4816-ac11-d54cf903104c\" (UID: \"c9599b5a-abed-4816-ac11-d54cf903104c\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.048687 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c9599b5a-abed-4816-ac11-d54cf903104c" (UID: "c9599b5a-abed-4816-ac11-d54cf903104c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.049420 4898 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.049449 4898 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9599b5a-abed-4816-ac11-d54cf903104c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.053574 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-scripts" (OuterVolumeSpecName: "scripts") pod "c9599b5a-abed-4816-ac11-d54cf903104c" (UID: "c9599b5a-abed-4816-ac11-d54cf903104c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.055203 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9599b5a-abed-4816-ac11-d54cf903104c-kube-api-access-z5czp" (OuterVolumeSpecName: "kube-api-access-z5czp") pod "c9599b5a-abed-4816-ac11-d54cf903104c" (UID: "c9599b5a-abed-4816-ac11-d54cf903104c"). InnerVolumeSpecName "kube-api-access-z5czp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.077984 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c9599b5a-abed-4816-ac11-d54cf903104c" (UID: "c9599b5a-abed-4816-ac11-d54cf903104c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.084072 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.151519 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5czp\" (UniqueName: \"kubernetes.io/projected/c9599b5a-abed-4816-ac11-d54cf903104c-kube-api-access-z5czp\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.151571 4898 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.151582 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.154156 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-config-data" (OuterVolumeSpecName: "config-data") pod "c9599b5a-abed-4816-ac11-d54cf903104c" (UID: "c9599b5a-abed-4816-ac11-d54cf903104c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.161818 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9599b5a-abed-4816-ac11-d54cf903104c" (UID: "c9599b5a-abed-4816-ac11-d54cf903104c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.255143 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-config-data\") pod \"0c4db390-4c17-49bc-9932-30f42c018772\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.255343 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c4db390-4c17-49bc-9932-30f42c018772-logs\") pod \"0c4db390-4c17-49bc-9932-30f42c018772\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.255384 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-nova-metadata-tls-certs\") pod \"0c4db390-4c17-49bc-9932-30f42c018772\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.255469 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-combined-ca-bundle\") pod \"0c4db390-4c17-49bc-9932-30f42c018772\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.255549 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcwmn\" (UniqueName: \"kubernetes.io/projected/0c4db390-4c17-49bc-9932-30f42c018772-kube-api-access-vcwmn\") pod \"0c4db390-4c17-49bc-9932-30f42c018772\" (UID: \"0c4db390-4c17-49bc-9932-30f42c018772\") " Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.255792 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c4db390-4c17-49bc-9932-30f42c018772-logs" (OuterVolumeSpecName: "logs") pod "0c4db390-4c17-49bc-9932-30f42c018772" (UID: "0c4db390-4c17-49bc-9932-30f42c018772"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.256242 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.256258 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c4db390-4c17-49bc-9932-30f42c018772-logs\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.256269 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9599b5a-abed-4816-ac11-d54cf903104c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.269583 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4db390-4c17-49bc-9932-30f42c018772-kube-api-access-vcwmn" (OuterVolumeSpecName: "kube-api-access-vcwmn") pod "0c4db390-4c17-49bc-9932-30f42c018772" (UID: "0c4db390-4c17-49bc-9932-30f42c018772"). InnerVolumeSpecName "kube-api-access-vcwmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.277860 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c4db390-4c17-49bc-9932-30f42c018772" (UID: "0c4db390-4c17-49bc-9932-30f42c018772"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.278759 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-config-data" (OuterVolumeSpecName: "config-data") pod "0c4db390-4c17-49bc-9932-30f42c018772" (UID: "0c4db390-4c17-49bc-9932-30f42c018772"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.302711 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0c4db390-4c17-49bc-9932-30f42c018772" (UID: "0c4db390-4c17-49bc-9932-30f42c018772"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.358099 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcwmn\" (UniqueName: \"kubernetes.io/projected/0c4db390-4c17-49bc-9932-30f42c018772-kube-api-access-vcwmn\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.358143 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.358159 4898 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.358170 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4db390-4c17-49bc-9932-30f42c018772-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.538518 4898 generic.go:334] "Generic (PLEG): container finished" podID="c9599b5a-abed-4816-ac11-d54cf903104c" containerID="75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56" exitCode=0 Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.538622 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.538617 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9599b5a-abed-4816-ac11-d54cf903104c","Type":"ContainerDied","Data":"75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56"} Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.540549 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9599b5a-abed-4816-ac11-d54cf903104c","Type":"ContainerDied","Data":"5abf0b6702bc0849e75f8236517ffe3f193d76aa3945b02b29a55f7fac6af163"} Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.540607 4898 scope.go:117] "RemoveContainer" containerID="8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.543575 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0f36a422-fdc8-447c-8abb-8deabac9c903","Type":"ContainerStarted","Data":"a9eb35735875e249f63982bfa3f0ca22e163893df6969e4b1779dd3ae372b659"} Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.543650 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.546391 4898 generic.go:334] "Generic (PLEG): container finished" podID="0c4db390-4c17-49bc-9932-30f42c018772" containerID="cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f" exitCode=0 Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.546422 4898 generic.go:334] "Generic (PLEG): container finished" podID="0c4db390-4c17-49bc-9932-30f42c018772" containerID="75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36" exitCode=143 Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.547334 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.547388 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c4db390-4c17-49bc-9932-30f42c018772","Type":"ContainerDied","Data":"cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f"} Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.547694 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c4db390-4c17-49bc-9932-30f42c018772","Type":"ContainerDied","Data":"75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36"} Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.547728 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c4db390-4c17-49bc-9932-30f42c018772","Type":"ContainerDied","Data":"2296cf05e2c5c0a57ec9b9333dbfdfd84c5245c2fc45a5e4623c8195ae8d85de"} Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.574952 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.574925361 podStartE2EDuration="2.574925361s" podCreationTimestamp="2026-01-20 04:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:08:42.558946433 +0000 UTC m=+1169.158734292" watchObservedRunningTime="2026-01-20 04:08:42.574925361 +0000 UTC m=+1169.174713230" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.576493 4898 scope.go:117] "RemoveContainer" containerID="1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.607779 4898 scope.go:117] "RemoveContainer" containerID="75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.622607 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.634718 4898 scope.go:117] "RemoveContainer" containerID="01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.636534 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.653585 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.667471 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.672807 4898 scope.go:117] "RemoveContainer" containerID="8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0" Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.673448 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0\": container with ID starting with 8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0 not found: ID does not exist" containerID="8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.673481 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0"} err="failed to get 
container status \"8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0\": rpc error: code = NotFound desc = could not find container \"8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0\": container with ID starting with 8ccf8b8ded0699324b19646bcfef55b639c460d9906dcb21c266c6c60858b8b0 not found: ID does not exist" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.673509 4898 scope.go:117] "RemoveContainer" containerID="1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942" Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.674346 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942\": container with ID starting with 1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942 not found: ID does not exist" containerID="1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.674385 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942"} err="failed to get container status \"1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942\": rpc error: code = NotFound desc = could not find container \"1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942\": container with ID starting with 1295d9e231f578d7bb93b4d678c95b86e4a93580d4823a0f0e68e065891a2942 not found: ID does not exist" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.674421 4898 scope.go:117] "RemoveContainer" containerID="75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56" Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.674972 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56\": container with ID starting with 75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56 not found: ID does not exist" containerID="75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.674996 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56"} err="failed to get container status \"75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56\": rpc error: code = NotFound desc = could not find container \"75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56\": container with ID starting with 75dabbe1ac180ad9b08b688a5cae6b2504ee5ffe912cfa6f36212cdf02299a56 not found: ID does not exist" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.675014 4898 scope.go:117] "RemoveContainer" containerID="01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8" Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.675257 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8\": container with ID starting with 01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8 not found: ID does not exist" containerID="01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.675280 4898 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8"} err="failed to get container status \"01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8\": rpc error: code = NotFound desc = could not find container \"01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8\": container with ID starting with 01270daffad3f2622c3a1ab9438890ad55bb92776fcb728f9030390723cbb8c8 not found: ID does not exist" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.675294 4898 scope.go:117] "RemoveContainer" containerID="cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.678048 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.685004 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="ceilometer-central-agent" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.685033 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="ceilometer-central-agent" Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.685080 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4db390-4c17-49bc-9932-30f42c018772" containerName="nova-metadata-metadata" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.685090 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4db390-4c17-49bc-9932-30f42c018772" containerName="nova-metadata-metadata" Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.685131 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4db390-4c17-49bc-9932-30f42c018772" containerName="nova-metadata-log" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.685140 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4db390-4c17-49bc-9932-30f42c018772" containerName="nova-metadata-log" Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.685159 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="sg-core" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.685172 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="sg-core" Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.685200 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="proxy-httpd" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.685207 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="proxy-httpd" Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.685239 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="ceilometer-notification-agent" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.685247 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="ceilometer-notification-agent" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.687796 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="ceilometer-notification-agent" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.687834 
4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c4db390-4c17-49bc-9932-30f42c018772" containerName="nova-metadata-metadata" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.687856 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="sg-core" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.687874 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="proxy-httpd" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.687900 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" containerName="ceilometer-central-agent" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.687917 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c4db390-4c17-49bc-9932-30f42c018772" containerName="nova-metadata-log" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.693777 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.698086 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.702566 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.704974 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.710124 4898 scope.go:117] "RemoveContainer" containerID="75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.718208 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.721003 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.723727 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.724393 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.729644 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.742813 4898 scope.go:117] "RemoveContainer" containerID="cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f" Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.743625 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f\": container with ID starting with cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f not found: ID does not exist" containerID="cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.743672 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f"} err="failed to get container status \"cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f\": rpc error: code = NotFound desc = could not find container \"cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f\": container with ID starting with cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f not found: ID does not exist" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.743722 4898 scope.go:117] "RemoveContainer" containerID="75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36" Jan 20 04:08:42 crc kubenswrapper[4898]: E0120 04:08:42.744132 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36\": container with ID starting with 75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36 not found: ID does not exist" containerID="75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.744154 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36"} err="failed to get container status \"75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36\": rpc error: code = NotFound desc = could not find container \"75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36\": container with ID starting with 75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36 not found: ID does not exist" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.744190 4898 scope.go:117] "RemoveContainer" containerID="cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.744561 4898 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f"} err="failed to get container status \"cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f\": rpc error: code = NotFound desc = could not find container \"cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f\": container with ID starting with cb35b20bd76fe8d12c62f2a72c8992ccda6ae53efe9fc1f4418ebbdc2e5c4b2f not found: ID does not exist" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.744851 4898 scope.go:117] "RemoveContainer" containerID="75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.745162 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36"} err="failed to get container status \"75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36\": rpc error: code = NotFound desc = could not find container \"75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36\": container with ID starting with 75994e1218a322a6cc01a89c86b9be3bc695ef8e7db72eb938c9e398c05e5b36 not found: ID does not exist" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.746282 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.772045 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzk8h\" (UniqueName: \"kubernetes.io/projected/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-kube-api-access-nzk8h\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.772177 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-config-data\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.772240 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.772272 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-logs\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.772305 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.873597 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-config-data\") pod 
\"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.873654 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-log-httpd\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.873732 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.873766 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-logs\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.873795 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.873818 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.873914 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.873945 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzk8h\" (UniqueName: \"kubernetes.io/projected/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-kube-api-access-nzk8h\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.873970 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrh8c\" (UniqueName: \"kubernetes.io/projected/c1fd031d-4a12-4968-aedd-fca53d682a02-kube-api-access-qrh8c\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.873999 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-scripts\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.874018 4898 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.874040 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-config-data\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.874081 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-run-httpd\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.874817 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-logs\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.878578 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.879500 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.891746 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-config-data\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.892783 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzk8h\" (UniqueName: \"kubernetes.io/projected/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-kube-api-access-nzk8h\") pod \"nova-metadata-0\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " pod="openstack/nova-metadata-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.975662 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-log-httpd\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.975763 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: 
I0120 04:08:42.975830 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.975861 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrh8c\" (UniqueName: \"kubernetes.io/projected/c1fd031d-4a12-4968-aedd-fca53d682a02-kube-api-access-qrh8c\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.975885 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-scripts\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.975901 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.975926 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-config-data\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.975963 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-run-httpd\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.976491 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-run-httpd\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.976792 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-log-httpd\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.981381 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.981625 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-config-data\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.982748 4898 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.983339 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.984094 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-scripts\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:42 crc kubenswrapper[4898]: I0120 04:08:42.994759 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrh8c\" (UniqueName: \"kubernetes.io/projected/c1fd031d-4a12-4968-aedd-fca53d682a02-kube-api-access-qrh8c\") pod \"ceilometer-0\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " pod="openstack/ceilometer-0" Jan 20 04:08:43 crc kubenswrapper[4898]: I0120 04:08:43.028872 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 04:08:43 crc kubenswrapper[4898]: I0120 04:08:43.043356 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:08:43 crc kubenswrapper[4898]: I0120 04:08:43.493193 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:08:43 crc kubenswrapper[4898]: I0120 04:08:43.567126 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61","Type":"ContainerStarted","Data":"3b2bb71cd3fc2c5b6deeb8ed690121d77c2f0425e4cca39323558d6feb5f83be"} Jan 20 04:08:43 crc kubenswrapper[4898]: I0120 04:08:43.588106 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:08:43 crc kubenswrapper[4898]: I0120 04:08:43.736158 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c4db390-4c17-49bc-9932-30f42c018772" path="/var/lib/kubelet/pods/0c4db390-4c17-49bc-9932-30f42c018772/volumes" Jan 20 04:08:43 crc kubenswrapper[4898]: I0120 04:08:43.736845 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9599b5a-abed-4816-ac11-d54cf903104c" path="/var/lib/kubelet/pods/c9599b5a-abed-4816-ac11-d54cf903104c/volumes" Jan 20 04:08:43 crc kubenswrapper[4898]: E0120 04:08:43.861479 4898 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5bd92d31d383a7d2cca7f3fe344073507a8e328f64d5c798bf82b4b06b273842" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 04:08:43 crc kubenswrapper[4898]: E0120 04:08:43.863945 4898 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5bd92d31d383a7d2cca7f3fe344073507a8e328f64d5c798bf82b4b06b273842" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 04:08:43 crc kubenswrapper[4898]: E0120 04:08:43.865343 4898 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5bd92d31d383a7d2cca7f3fe344073507a8e328f64d5c798bf82b4b06b273842" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 04:08:43 crc kubenswrapper[4898]: E0120 04:08:43.865486 4898 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2b5b8b70-7e54-4583-9541-1d7698db187a" containerName="nova-scheduler-scheduler" Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.582130 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61","Type":"ContainerStarted","Data":"7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed"} Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.582519 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61","Type":"ContainerStarted","Data":"f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84"} Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.585322 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1fd031d-4a12-4968-aedd-fca53d682a02","Type":"ContainerStarted","Data":"bf36dac6edad60db588e62c6eae56a0a220f4e1fae109df75d31b0efa76b5e24"} Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.585370 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1fd031d-4a12-4968-aedd-fca53d682a02","Type":"ContainerStarted","Data":"6da29edce1bb1f6ed185a52cc76e60f7ec64de77fa09df5d8facdd861916afa8"} Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.588222 4898 generic.go:334] "Generic (PLEG): container finished" podID="2b5b8b70-7e54-4583-9541-1d7698db187a" containerID="5bd92d31d383a7d2cca7f3fe344073507a8e328f64d5c798bf82b4b06b273842" exitCode=0 Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.588280 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b5b8b70-7e54-4583-9541-1d7698db187a","Type":"ContainerDied","Data":"5bd92d31d383a7d2cca7f3fe344073507a8e328f64d5c798bf82b4b06b273842"} Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.611618 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.611595794 podStartE2EDuration="2.611595794s" podCreationTimestamp="2026-01-20 04:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:08:44.610727166 +0000 UTC m=+1171.210515025" watchObservedRunningTime="2026-01-20 04:08:44.611595794 +0000 UTC m=+1171.211383653" Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.686204 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.826140 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j86dd\" (UniqueName: \"kubernetes.io/projected/2b5b8b70-7e54-4583-9541-1d7698db187a-kube-api-access-j86dd\") pod \"2b5b8b70-7e54-4583-9541-1d7698db187a\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.826409 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-config-data\") pod \"2b5b8b70-7e54-4583-9541-1d7698db187a\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.826499 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-combined-ca-bundle\") pod \"2b5b8b70-7e54-4583-9541-1d7698db187a\" (UID: \"2b5b8b70-7e54-4583-9541-1d7698db187a\") " Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.853679 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5b8b70-7e54-4583-9541-1d7698db187a-kube-api-access-j86dd" (OuterVolumeSpecName: "kube-api-access-j86dd") pod "2b5b8b70-7e54-4583-9541-1d7698db187a" (UID: "2b5b8b70-7e54-4583-9541-1d7698db187a"). InnerVolumeSpecName "kube-api-access-j86dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.881528 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b5b8b70-7e54-4583-9541-1d7698db187a" (UID: "2b5b8b70-7e54-4583-9541-1d7698db187a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.917487 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-config-data" (OuterVolumeSpecName: "config-data") pod "2b5b8b70-7e54-4583-9541-1d7698db187a" (UID: "2b5b8b70-7e54-4583-9541-1d7698db187a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.928853 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.928887 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5b8b70-7e54-4583-9541-1d7698db187a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:44 crc kubenswrapper[4898]: I0120 04:08:44.928899 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j86dd\" (UniqueName: \"kubernetes.io/projected/2b5b8b70-7e54-4583-9541-1d7698db187a-kube-api-access-j86dd\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.542033 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.600803 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1fd031d-4a12-4968-aedd-fca53d682a02","Type":"ContainerStarted","Data":"8114e96c1ebecd61606449497e1ddc6001d1c7581f4d826e0be8b8c8b7f074ae"} Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.604878 4898 generic.go:334] "Generic (PLEG): container finished" podID="ad0af08d-c895-468d-9be8-2d484849537a" containerID="bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d" exitCode=0 Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.605011 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad0af08d-c895-468d-9be8-2d484849537a","Type":"ContainerDied","Data":"bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d"} Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.605098 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad0af08d-c895-468d-9be8-2d484849537a","Type":"ContainerDied","Data":"736a1355a4b9452ee47ea55fe2e847084fddc4b6b8ecc8cbc63de94db84e87e4"} Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.605101 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.605126 4898 scope.go:117] "RemoveContainer" containerID="bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.606956 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b5b8b70-7e54-4583-9541-1d7698db187a","Type":"ContainerDied","Data":"011624a155efa124703870dcf91c5ef0152cad48d0d901983827a858369dbaf3"} Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.607001 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.636652 4898 scope.go:117] "RemoveContainer" containerID="3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.646448 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad0af08d-c895-468d-9be8-2d484849537a-logs\") pod \"ad0af08d-c895-468d-9be8-2d484849537a\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.646695 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-combined-ca-bundle\") pod \"ad0af08d-c895-468d-9be8-2d484849537a\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.646761 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z84n2\" (UniqueName: \"kubernetes.io/projected/ad0af08d-c895-468d-9be8-2d484849537a-kube-api-access-z84n2\") pod \"ad0af08d-c895-468d-9be8-2d484849537a\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.647101 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-config-data\") pod \"ad0af08d-c895-468d-9be8-2d484849537a\" (UID: \"ad0af08d-c895-468d-9be8-2d484849537a\") " Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.647004 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad0af08d-c895-468d-9be8-2d484849537a-logs" (OuterVolumeSpecName: "logs") pod "ad0af08d-c895-468d-9be8-2d484849537a" (UID: "ad0af08d-c895-468d-9be8-2d484849537a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.648567 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad0af08d-c895-468d-9be8-2d484849537a-logs\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.656342 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0af08d-c895-468d-9be8-2d484849537a-kube-api-access-z84n2" (OuterVolumeSpecName: "kube-api-access-z84n2") pod "ad0af08d-c895-468d-9be8-2d484849537a" (UID: "ad0af08d-c895-468d-9be8-2d484849537a"). InnerVolumeSpecName "kube-api-access-z84n2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.664189 4898 scope.go:117] "RemoveContainer" containerID="bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d" Jan 20 04:08:45 crc kubenswrapper[4898]: E0120 04:08:45.664624 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d\": container with ID starting with bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d not found: ID does not exist" containerID="bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.664664 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d"} err="failed to get container status \"bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d\": rpc error: code = NotFound desc = could not find container \"bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d\": container with ID starting with bd32c4e5fb71b14ec6404ee960cd1b0672c448a9dbab87e86c7a5e3d124b1f6d not found: ID does not exist" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.664688 4898 scope.go:117] "RemoveContainer" containerID="3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82" Jan 20 04:08:45 crc kubenswrapper[4898]: E0120 04:08:45.664906 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82\": container with ID starting with 3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82 not found: ID does not exist" containerID="3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.664929 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82"} err="failed to get container status \"3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82\": rpc error: code = NotFound desc = could not find container \"3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82\": container with ID starting with 3fa835e1796e065af5123ea3ba608559b6d51741aaf2e2a15454a72c62469c82 not found: ID does not exist" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.664941 4898 scope.go:117] "RemoveContainer" containerID="5bd92d31d383a7d2cca7f3fe344073507a8e328f64d5c798bf82b4b06b273842" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.679825 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.694063 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.703203 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:08:45 crc kubenswrapper[4898]: E0120 04:08:45.703982 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0af08d-c895-468d-9be8-2d484849537a" containerName="nova-api-log" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.704012 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0af08d-c895-468d-9be8-2d484849537a" containerName="nova-api-log" 
Jan 20 04:08:45 crc kubenswrapper[4898]: E0120 04:08:45.704043 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5b8b70-7e54-4583-9541-1d7698db187a" containerName="nova-scheduler-scheduler" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.704050 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5b8b70-7e54-4583-9541-1d7698db187a" containerName="nova-scheduler-scheduler" Jan 20 04:08:45 crc kubenswrapper[4898]: E0120 04:08:45.704088 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0af08d-c895-468d-9be8-2d484849537a" containerName="nova-api-api" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.704097 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0af08d-c895-468d-9be8-2d484849537a" containerName="nova-api-api" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.704354 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5b8b70-7e54-4583-9541-1d7698db187a" containerName="nova-scheduler-scheduler" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.704388 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0af08d-c895-468d-9be8-2d484849537a" containerName="nova-api-api" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.704420 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0af08d-c895-468d-9be8-2d484849537a" containerName="nova-api-log" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.705641 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.711362 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.713077 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-config-data" (OuterVolumeSpecName: "config-data") pod "ad0af08d-c895-468d-9be8-2d484849537a" (UID: "ad0af08d-c895-468d-9be8-2d484849537a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.730664 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad0af08d-c895-468d-9be8-2d484849537a" (UID: "ad0af08d-c895-468d-9be8-2d484849537a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.751979 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z84n2\" (UniqueName: \"kubernetes.io/projected/ad0af08d-c895-468d-9be8-2d484849537a-kube-api-access-z84n2\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.752015 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.752027 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0af08d-c895-468d-9be8-2d484849537a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.758333 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5b8b70-7e54-4583-9541-1d7698db187a" path="/var/lib/kubelet/pods/2b5b8b70-7e54-4583-9541-1d7698db187a/volumes" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.759102 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.853362 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-config-data\") pod \"nova-scheduler-0\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.853511 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.853612 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69vvc\" (UniqueName: \"kubernetes.io/projected/6b38ae07-5732-4b51-96cc-4981a61b4ade-kube-api-access-69vvc\") pod \"nova-scheduler-0\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.929309 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.937703 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.948964 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.950459 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.953499 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.955365 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69vvc\" (UniqueName: \"kubernetes.io/projected/6b38ae07-5732-4b51-96cc-4981a61b4ade-kube-api-access-69vvc\") pod \"nova-scheduler-0\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.955460 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-config-data\") pod \"nova-scheduler-0\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.955594 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.960007 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-config-data\") pod \"nova-scheduler-0\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.961968 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.964996 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:08:45 crc kubenswrapper[4898]: I0120 04:08:45.994729 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69vvc\" (UniqueName: \"kubernetes.io/projected/6b38ae07-5732-4b51-96cc-4981a61b4ade-kube-api-access-69vvc\") pod \"nova-scheduler-0\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " pod="openstack/nova-scheduler-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.030737 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.057885 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-config-data\") pod \"nova-api-0\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.057961 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws8kw\" (UniqueName: \"kubernetes.io/projected/f8d0a764-e953-4e39-a363-134813e0fbc6-kube-api-access-ws8kw\") pod \"nova-api-0\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.058193 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d0a764-e953-4e39-a363-134813e0fbc6-logs\") pod \"nova-api-0\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.058295 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.160024 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-config-data\") pod \"nova-api-0\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.160774 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws8kw\" (UniqueName: \"kubernetes.io/projected/f8d0a764-e953-4e39-a363-134813e0fbc6-kube-api-access-ws8kw\") pod \"nova-api-0\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.160827 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d0a764-e953-4e39-a363-134813e0fbc6-logs\") pod \"nova-api-0\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.160854 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.161749 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d0a764-e953-4e39-a363-134813e0fbc6-logs\") pod \"nova-api-0\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.168574 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.169101 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-config-data\") pod \"nova-api-0\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.181050 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws8kw\" (UniqueName: \"kubernetes.io/projected/f8d0a764-e953-4e39-a363-134813e0fbc6-kube-api-access-ws8kw\") pod \"nova-api-0\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.363001 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.523251 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:08:46 crc kubenswrapper[4898]: W0120 04:08:46.538899 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b38ae07_5732_4b51_96cc_4981a61b4ade.slice/crio-ad041e8324814c05f4d5c98f8f59994882014b4122a90f343a750408db8cacdb WatchSource:0}: Error finding container ad041e8324814c05f4d5c98f8f59994882014b4122a90f343a750408db8cacdb: Status 404 returned error can't find the container with id ad041e8324814c05f4d5c98f8f59994882014b4122a90f343a750408db8cacdb Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.616985 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6b38ae07-5732-4b51-96cc-4981a61b4ade","Type":"ContainerStarted","Data":"ad041e8324814c05f4d5c98f8f59994882014b4122a90f343a750408db8cacdb"} Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.618900 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1fd031d-4a12-4968-aedd-fca53d682a02","Type":"ContainerStarted","Data":"639d6b1a7ebd8d21581df66c5f139e40ab8dbbed6d6eb0a97e140f37e5bc79f0"} Jan 20 04:08:46 crc kubenswrapper[4898]: I0120 04:08:46.841951 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:08:46 crc kubenswrapper[4898]: W0120 04:08:46.845012 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice/crio-42e115cdac8523ea06c622ffa25eb73364e8a91aa1d27e1315fb809e276f265d WatchSource:0}: Error finding container 42e115cdac8523ea06c622ffa25eb73364e8a91aa1d27e1315fb809e276f265d: Status 404 returned error can't find the container with id 42e115cdac8523ea06c622ffa25eb73364e8a91aa1d27e1315fb809e276f265d Jan 20 04:08:47 crc kubenswrapper[4898]: I0120 04:08:47.636285 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8d0a764-e953-4e39-a363-134813e0fbc6","Type":"ContainerStarted","Data":"72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0"} Jan 20 04:08:47 crc kubenswrapper[4898]: I0120 04:08:47.636783 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8d0a764-e953-4e39-a363-134813e0fbc6","Type":"ContainerStarted","Data":"1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625"} Jan 20 04:08:47 crc kubenswrapper[4898]: I0120 
04:08:47.636798 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8d0a764-e953-4e39-a363-134813e0fbc6","Type":"ContainerStarted","Data":"42e115cdac8523ea06c622ffa25eb73364e8a91aa1d27e1315fb809e276f265d"} Jan 20 04:08:47 crc kubenswrapper[4898]: I0120 04:08:47.640739 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1fd031d-4a12-4968-aedd-fca53d682a02","Type":"ContainerStarted","Data":"7d48a59843546b7202d6c6e0addaa6ad2ae2a6ff46508e949dbfb9a71cf1059f"} Jan 20 04:08:47 crc kubenswrapper[4898]: I0120 04:08:47.641794 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 04:08:47 crc kubenswrapper[4898]: I0120 04:08:47.644567 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6b38ae07-5732-4b51-96cc-4981a61b4ade","Type":"ContainerStarted","Data":"6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d"} Jan 20 04:08:47 crc kubenswrapper[4898]: I0120 04:08:47.660014 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.659997669 podStartE2EDuration="2.659997669s" podCreationTimestamp="2026-01-20 04:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:08:47.655724424 +0000 UTC m=+1174.255512293" watchObservedRunningTime="2026-01-20 04:08:47.659997669 +0000 UTC m=+1174.259785528" Jan 20 04:08:47 crc kubenswrapper[4898]: I0120 04:08:47.684649 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6846307510000003 podStartE2EDuration="2.684630751s" podCreationTimestamp="2026-01-20 04:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:08:47.67671878 +0000 UTC m=+1174.276506659" watchObservedRunningTime="2026-01-20 04:08:47.684630751 +0000 UTC m=+1174.284418620" Jan 20 04:08:47 crc kubenswrapper[4898]: I0120 04:08:47.710571 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.954528508 podStartE2EDuration="5.710544744s" podCreationTimestamp="2026-01-20 04:08:42 +0000 UTC" firstStartedPulling="2026-01-20 04:08:43.596850715 +0000 UTC m=+1170.196638574" lastFinishedPulling="2026-01-20 04:08:47.352866951 +0000 UTC m=+1173.952654810" observedRunningTime="2026-01-20 04:08:47.701494396 +0000 UTC m=+1174.301282245" watchObservedRunningTime="2026-01-20 04:08:47.710544744 +0000 UTC m=+1174.310332613" Jan 20 04:08:47 crc kubenswrapper[4898]: I0120 04:08:47.777361 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0af08d-c895-468d-9be8-2d484849537a" path="/var/lib/kubelet/pods/ad0af08d-c895-468d-9be8-2d484849537a/volumes" Jan 20 04:08:48 crc kubenswrapper[4898]: I0120 04:08:48.030385 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 04:08:48 crc kubenswrapper[4898]: I0120 04:08:48.030466 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 04:08:48 crc kubenswrapper[4898]: I0120 04:08:48.898913 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 20 04:08:51 crc kubenswrapper[4898]: I0120 04:08:51.016189 
4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 20 04:08:51 crc kubenswrapper[4898]: I0120 04:08:51.033153 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 20 04:08:53 crc kubenswrapper[4898]: I0120 04:08:53.030124 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 04:08:53 crc kubenswrapper[4898]: I0120 04:08:53.030630 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 04:08:54 crc kubenswrapper[4898]: I0120 04:08:54.045530 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 04:08:54 crc kubenswrapper[4898]: I0120 04:08:54.045557 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 04:08:56 crc kubenswrapper[4898]: I0120 04:08:56.032495 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 20 04:08:56 crc kubenswrapper[4898]: I0120 04:08:56.093128 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 20 04:08:56 crc kubenswrapper[4898]: I0120 04:08:56.363242 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 04:08:56 crc kubenswrapper[4898]: I0120 04:08:56.363279 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 04:08:56 crc kubenswrapper[4898]: I0120 04:08:56.767498 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 20 04:08:57 crc kubenswrapper[4898]: I0120 04:08:57.445798 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 04:08:57 crc kubenswrapper[4898]: I0120 04:08:57.446238 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.037738 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.038470 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.044808 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.047758 4898 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.812607 4898 generic.go:334] "Generic (PLEG): container finished" podID="a02121b8-aa89-467b-bfbc-8b04e2f198a0" containerID="5a2ed03b4d2609760f3955725dac6c87a66ccc6574183d749f6489f5cd5dbe20" exitCode=137 Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.812698 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a02121b8-aa89-467b-bfbc-8b04e2f198a0","Type":"ContainerDied","Data":"5a2ed03b4d2609760f3955725dac6c87a66ccc6574183d749f6489f5cd5dbe20"} Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.813017 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a02121b8-aa89-467b-bfbc-8b04e2f198a0","Type":"ContainerDied","Data":"8826433209bcf9ddf9a9946db6f3bad85d1df8606a56bb0a550329d6e015ffeb"} Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.813049 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8826433209bcf9ddf9a9946db6f3bad85d1df8606a56bb0a550329d6e015ffeb" Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.881052 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.912269 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-config-data\") pod \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.912470 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-combined-ca-bundle\") pod \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.912578 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75ddw\" (UniqueName: \"kubernetes.io/projected/a02121b8-aa89-467b-bfbc-8b04e2f198a0-kube-api-access-75ddw\") pod \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\" (UID: \"a02121b8-aa89-467b-bfbc-8b04e2f198a0\") " Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.926094 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02121b8-aa89-467b-bfbc-8b04e2f198a0-kube-api-access-75ddw" (OuterVolumeSpecName: "kube-api-access-75ddw") pod "a02121b8-aa89-467b-bfbc-8b04e2f198a0" (UID: "a02121b8-aa89-467b-bfbc-8b04e2f198a0"). InnerVolumeSpecName "kube-api-access-75ddw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.959906 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a02121b8-aa89-467b-bfbc-8b04e2f198a0" (UID: "a02121b8-aa89-467b-bfbc-8b04e2f198a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:03 crc kubenswrapper[4898]: I0120 04:09:03.965450 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-config-data" (OuterVolumeSpecName: "config-data") pod "a02121b8-aa89-467b-bfbc-8b04e2f198a0" (UID: "a02121b8-aa89-467b-bfbc-8b04e2f198a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.021738 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75ddw\" (UniqueName: \"kubernetes.io/projected/a02121b8-aa89-467b-bfbc-8b04e2f198a0-kube-api-access-75ddw\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.022025 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.022038 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02121b8-aa89-467b-bfbc-8b04e2f198a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.820183 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.863799 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.874161 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.885190 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 04:09:04 crc kubenswrapper[4898]: E0120 04:09:04.885867 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02121b8-aa89-467b-bfbc-8b04e2f198a0" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.885900 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02121b8-aa89-467b-bfbc-8b04e2f198a0" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.886214 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="a02121b8-aa89-467b-bfbc-8b04e2f198a0" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.887220 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.891185 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.892444 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.894175 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.896124 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.939069 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.939381 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.939506 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.939651 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:04 crc kubenswrapper[4898]: I0120 04:09:04.939757 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnl57\" (UniqueName: \"kubernetes.io/projected/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-kube-api-access-bnl57\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.041078 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.041146 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.041239 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.041280 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnl57\" (UniqueName: \"kubernetes.io/projected/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-kube-api-access-bnl57\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.041361 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.048511 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.048663 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.048989 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.050357 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.064854 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnl57\" (UniqueName: \"kubernetes.io/projected/5e24c3c5-6665-47c4-a82b-0b2c0da055ea-kube-api-access-bnl57\") pod \"nova-cell1-novncproxy-0\" (UID: \"5e24c3c5-6665-47c4-a82b-0b2c0da055ea\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.222204 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.739517 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a02121b8-aa89-467b-bfbc-8b04e2f198a0" path="/var/lib/kubelet/pods/a02121b8-aa89-467b-bfbc-8b04e2f198a0/volumes" Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.741393 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 04:09:05 crc kubenswrapper[4898]: I0120 04:09:05.830812 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5e24c3c5-6665-47c4-a82b-0b2c0da055ea","Type":"ContainerStarted","Data":"05429df0ade21934c38a8e6c629f3ac96e2f352fdb7686b48f2196ed243363b1"} Jan 20 04:09:06 crc kubenswrapper[4898]: I0120 04:09:06.368254 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 04:09:06 crc kubenswrapper[4898]: I0120 04:09:06.368837 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 04:09:06 crc kubenswrapper[4898]: I0120 04:09:06.380100 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 04:09:06 crc kubenswrapper[4898]: I0120 04:09:06.381372 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 04:09:06 crc kubenswrapper[4898]: I0120 04:09:06.844834 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5e24c3c5-6665-47c4-a82b-0b2c0da055ea","Type":"ContainerStarted","Data":"06f97456365254c68366b15d4f1f72102bb7f23b4f614c50f834377dfaf1a4bd"} Jan 20 04:09:06 crc kubenswrapper[4898]: I0120 04:09:06.845325 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 04:09:06 crc kubenswrapper[4898]: I0120 04:09:06.861532 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 04:09:06 crc kubenswrapper[4898]: I0120 04:09:06.884375 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.884347299 podStartE2EDuration="2.884347299s" podCreationTimestamp="2026-01-20 04:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:09:06.870577402 +0000 UTC m=+1193.470365291" watchObservedRunningTime="2026-01-20 04:09:06.884347299 +0000 UTC m=+1193.484135168" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.090539 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-9cg5d"] Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.093388 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.117599 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-9cg5d"] Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.208552 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.208632 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.208680 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwfn\" (UniqueName: \"kubernetes.io/projected/45b57056-2421-4715-a5ec-3c8f74566387-kube-api-access-nnwfn\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.208803 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-config\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.208839 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.208890 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.310490 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-config\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.310953 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.311012 4898 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.311071 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.311147 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.311187 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnwfn\" (UniqueName: \"kubernetes.io/projected/45b57056-2421-4715-a5ec-3c8f74566387-kube-api-access-nnwfn\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.311890 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-config\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.313046 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.313385 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.313638 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.313744 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45b57056-2421-4715-a5ec-3c8f74566387-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.336051 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnwfn\" (UniqueName: 
\"kubernetes.io/projected/45b57056-2421-4715-a5ec-3c8f74566387-kube-api-access-nnwfn\") pod \"dnsmasq-dns-cd5cbd7b9-9cg5d\" (UID: \"45b57056-2421-4715-a5ec-3c8f74566387\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:07 crc kubenswrapper[4898]: I0120 04:09:07.418954 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:08 crc kubenswrapper[4898]: I0120 04:09:08.047376 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-9cg5d"] Jan 20 04:09:08 crc kubenswrapper[4898]: W0120 04:09:08.055843 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45b57056_2421_4715_a5ec_3c8f74566387.slice/crio-9761d9f7de8e40e2a1ec78c5d921c102a43142486ed3d714622d5c390a5d7799 WatchSource:0}: Error finding container 9761d9f7de8e40e2a1ec78c5d921c102a43142486ed3d714622d5c390a5d7799: Status 404 returned error can't find the container with id 9761d9f7de8e40e2a1ec78c5d921c102a43142486ed3d714622d5c390a5d7799 Jan 20 04:09:08 crc kubenswrapper[4898]: I0120 04:09:08.861426 4898 generic.go:334] "Generic (PLEG): container finished" podID="45b57056-2421-4715-a5ec-3c8f74566387" containerID="1e3da8adf1a0e00cb2b771a3f0f65359d5d765d8b1f8167a9813a5e90881cd48" exitCode=0 Jan 20 04:09:08 crc kubenswrapper[4898]: I0120 04:09:08.863572 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" event={"ID":"45b57056-2421-4715-a5ec-3c8f74566387","Type":"ContainerDied","Data":"1e3da8adf1a0e00cb2b771a3f0f65359d5d765d8b1f8167a9813a5e90881cd48"} Jan 20 04:09:08 crc kubenswrapper[4898]: I0120 04:09:08.863625 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" event={"ID":"45b57056-2421-4715-a5ec-3c8f74566387","Type":"ContainerStarted","Data":"9761d9f7de8e40e2a1ec78c5d921c102a43142486ed3d714622d5c390a5d7799"} Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.279199 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.279862 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="ceilometer-central-agent" containerID="cri-o://bf36dac6edad60db588e62c6eae56a0a220f4e1fae109df75d31b0efa76b5e24" gracePeriod=30 Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.280012 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="proxy-httpd" containerID="cri-o://7d48a59843546b7202d6c6e0addaa6ad2ae2a6ff46508e949dbfb9a71cf1059f" gracePeriod=30 Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.280104 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="sg-core" containerID="cri-o://639d6b1a7ebd8d21581df66c5f139e40ab8dbbed6d6eb0a97e140f37e5bc79f0" gracePeriod=30 Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.286254 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.286872 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" 
containerName="ceilometer-notification-agent" containerID="cri-o://8114e96c1ebecd61606449497e1ddc6001d1c7581f4d826e0be8b8c8b7f074ae" gracePeriod=30 Jan 20 04:09:09 crc kubenswrapper[4898]: E0120 04:09:09.450908 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1fd031d_4a12_4968_aedd_fca53d682a02.slice/crio-conmon-639d6b1a7ebd8d21581df66c5f139e40ab8dbbed6d6eb0a97e140f37e5bc79f0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1fd031d_4a12_4968_aedd_fca53d682a02.slice/crio-639d6b1a7ebd8d21581df66c5f139e40ab8dbbed6d6eb0a97e140f37e5bc79f0.scope\": RecentStats: unable to find data in memory cache]" Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.747260 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.872589 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" event={"ID":"45b57056-2421-4715-a5ec-3c8f74566387","Type":"ContainerStarted","Data":"c0550cdfe6b963e262266c519d87d95f29d996eed7650cccdc402460abbe1629"} Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.872726 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.875509 4898 generic.go:334] "Generic (PLEG): container finished" podID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerID="7d48a59843546b7202d6c6e0addaa6ad2ae2a6ff46508e949dbfb9a71cf1059f" exitCode=0 Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.875558 4898 generic.go:334] "Generic (PLEG): container finished" podID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerID="639d6b1a7ebd8d21581df66c5f139e40ab8dbbed6d6eb0a97e140f37e5bc79f0" exitCode=2 Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.875570 4898 generic.go:334] "Generic (PLEG): container finished" podID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerID="bf36dac6edad60db588e62c6eae56a0a220f4e1fae109df75d31b0efa76b5e24" exitCode=0 Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.875603 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1fd031d-4a12-4968-aedd-fca53d682a02","Type":"ContainerDied","Data":"7d48a59843546b7202d6c6e0addaa6ad2ae2a6ff46508e949dbfb9a71cf1059f"} Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.875659 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1fd031d-4a12-4968-aedd-fca53d682a02","Type":"ContainerDied","Data":"639d6b1a7ebd8d21581df66c5f139e40ab8dbbed6d6eb0a97e140f37e5bc79f0"} Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.875680 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1fd031d-4a12-4968-aedd-fca53d682a02","Type":"ContainerDied","Data":"bf36dac6edad60db588e62c6eae56a0a220f4e1fae109df75d31b0efa76b5e24"} Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.875791 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerName="nova-api-log" containerID="cri-o://1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625" gracePeriod=30 Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.875814 4898 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-api-0" podUID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerName="nova-api-api" containerID="cri-o://72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0" gracePeriod=30 Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.898957 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" podStartSLOduration=2.898939822 podStartE2EDuration="2.898939822s" podCreationTimestamp="2026-01-20 04:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:09:09.891688862 +0000 UTC m=+1196.491476731" watchObservedRunningTime="2026-01-20 04:09:09.898939822 +0000 UTC m=+1196.498727681" Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.975774 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:09:09 crc kubenswrapper[4898]: I0120 04:09:09.975833 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:09:10 crc kubenswrapper[4898]: I0120 04:09:10.223111 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:10 crc kubenswrapper[4898]: I0120 04:09:10.891883 4898 generic.go:334] "Generic (PLEG): container finished" podID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerID="1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625" exitCode=143 Jan 20 04:09:10 crc kubenswrapper[4898]: I0120 04:09:10.892147 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8d0a764-e953-4e39-a363-134813e0fbc6","Type":"ContainerDied","Data":"1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625"} Jan 20 04:09:10 crc kubenswrapper[4898]: I0120 04:09:10.896057 4898 generic.go:334] "Generic (PLEG): container finished" podID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerID="8114e96c1ebecd61606449497e1ddc6001d1c7581f4d826e0be8b8c8b7f074ae" exitCode=0 Jan 20 04:09:10 crc kubenswrapper[4898]: I0120 04:09:10.896227 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1fd031d-4a12-4968-aedd-fca53d682a02","Type":"ContainerDied","Data":"8114e96c1ebecd61606449497e1ddc6001d1c7581f4d826e0be8b8c8b7f074ae"} Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.166667 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.302440 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-ceilometer-tls-certs\") pod \"c1fd031d-4a12-4968-aedd-fca53d682a02\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.302580 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-combined-ca-bundle\") pod \"c1fd031d-4a12-4968-aedd-fca53d682a02\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.302662 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-log-httpd\") pod \"c1fd031d-4a12-4968-aedd-fca53d682a02\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.302867 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrh8c\" (UniqueName: \"kubernetes.io/projected/c1fd031d-4a12-4968-aedd-fca53d682a02-kube-api-access-qrh8c\") pod \"c1fd031d-4a12-4968-aedd-fca53d682a02\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.302898 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-sg-core-conf-yaml\") pod \"c1fd031d-4a12-4968-aedd-fca53d682a02\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.302925 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-config-data\") pod \"c1fd031d-4a12-4968-aedd-fca53d682a02\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.302996 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-scripts\") pod \"c1fd031d-4a12-4968-aedd-fca53d682a02\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.303023 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-run-httpd\") pod \"c1fd031d-4a12-4968-aedd-fca53d682a02\" (UID: \"c1fd031d-4a12-4968-aedd-fca53d682a02\") " Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.303325 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c1fd031d-4a12-4968-aedd-fca53d682a02" (UID: "c1fd031d-4a12-4968-aedd-fca53d682a02"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.303838 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c1fd031d-4a12-4968-aedd-fca53d682a02" (UID: "c1fd031d-4a12-4968-aedd-fca53d682a02"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.304375 4898 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.304398 4898 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1fd031d-4a12-4968-aedd-fca53d682a02-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.309396 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1fd031d-4a12-4968-aedd-fca53d682a02-kube-api-access-qrh8c" (OuterVolumeSpecName: "kube-api-access-qrh8c") pod "c1fd031d-4a12-4968-aedd-fca53d682a02" (UID: "c1fd031d-4a12-4968-aedd-fca53d682a02"). InnerVolumeSpecName "kube-api-access-qrh8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.310047 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-scripts" (OuterVolumeSpecName: "scripts") pod "c1fd031d-4a12-4968-aedd-fca53d682a02" (UID: "c1fd031d-4a12-4968-aedd-fca53d682a02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.343574 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c1fd031d-4a12-4968-aedd-fca53d682a02" (UID: "c1fd031d-4a12-4968-aedd-fca53d682a02"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.365991 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c1fd031d-4a12-4968-aedd-fca53d682a02" (UID: "c1fd031d-4a12-4968-aedd-fca53d682a02"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.390113 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1fd031d-4a12-4968-aedd-fca53d682a02" (UID: "c1fd031d-4a12-4968-aedd-fca53d682a02"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.405942 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.405970 4898 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.405982 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.405992 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrh8c\" (UniqueName: \"kubernetes.io/projected/c1fd031d-4a12-4968-aedd-fca53d682a02-kube-api-access-qrh8c\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.406001 4898 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.429705 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-config-data" (OuterVolumeSpecName: "config-data") pod "c1fd031d-4a12-4968-aedd-fca53d682a02" (UID: "c1fd031d-4a12-4968-aedd-fca53d682a02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.508364 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1fd031d-4a12-4968-aedd-fca53d682a02-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.911346 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1fd031d-4a12-4968-aedd-fca53d682a02","Type":"ContainerDied","Data":"6da29edce1bb1f6ed185a52cc76e60f7ec64de77fa09df5d8facdd861916afa8"} Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.911403 4898 scope.go:117] "RemoveContainer" containerID="7d48a59843546b7202d6c6e0addaa6ad2ae2a6ff46508e949dbfb9a71cf1059f" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.911563 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.952345 4898 scope.go:117] "RemoveContainer" containerID="639d6b1a7ebd8d21581df66c5f139e40ab8dbbed6d6eb0a97e140f37e5bc79f0" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.958137 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.975499 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.984848 4898 scope.go:117] "RemoveContainer" containerID="8114e96c1ebecd61606449497e1ddc6001d1c7581f4d826e0be8b8c8b7f074ae" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.985923 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:09:11 crc kubenswrapper[4898]: E0120 04:09:11.986602 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="ceilometer-central-agent" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.986637 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="ceilometer-central-agent" Jan 20 04:09:11 crc kubenswrapper[4898]: E0120 04:09:11.986674 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="proxy-httpd" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.986689 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="proxy-httpd" Jan 20 04:09:11 crc kubenswrapper[4898]: E0120 04:09:11.986730 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="ceilometer-notification-agent" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.986743 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="ceilometer-notification-agent" Jan 20 04:09:11 crc kubenswrapper[4898]: E0120 04:09:11.986766 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="sg-core" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.986778 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="sg-core" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.987176 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="proxy-httpd" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.987202 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="ceilometer-central-agent" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.987223 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="ceilometer-notification-agent" Jan 20 04:09:11 crc kubenswrapper[4898]: I0120 04:09:11.987242 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" containerName="sg-core" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.001505 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.009697 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.010015 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.010173 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.032550 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.045116 4898 scope.go:117] "RemoveContainer" containerID="bf36dac6edad60db588e62c6eae56a0a220f4e1fae109df75d31b0efa76b5e24" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.127690 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.127828 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-run-httpd\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.127875 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.127916 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-scripts\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.128328 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-log-httpd\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.128645 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hpjt\" (UniqueName: \"kubernetes.io/projected/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-kube-api-access-5hpjt\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.128746 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: 
I0120 04:09:12.128787 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-config-data\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.230212 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hpjt\" (UniqueName: \"kubernetes.io/projected/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-kube-api-access-5hpjt\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.230275 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.230306 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-config-data\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.230412 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.230514 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-run-httpd\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.230554 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.230610 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-scripts\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.230695 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-log-httpd\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.231423 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-log-httpd\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 
04:09:12.231515 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-run-httpd\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.238110 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.238613 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-config-data\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.239821 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.241249 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-scripts\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.243975 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.250820 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hpjt\" (UniqueName: \"kubernetes.io/projected/8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0-kube-api-access-5hpjt\") pod \"ceilometer-0\" (UID: \"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0\") " pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.349181 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.862570 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 04:09:12 crc kubenswrapper[4898]: W0120 04:09:12.871142 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fd30d57_bc56_4a2f_aba3_b9208fc1b0a0.slice/crio-459b6e017940f744d03669dbfda98f98b6dcc70ea0122f5fe7d1e71cabbd8cae WatchSource:0}: Error finding container 459b6e017940f744d03669dbfda98f98b6dcc70ea0122f5fe7d1e71cabbd8cae: Status 404 returned error can't find the container with id 459b6e017940f744d03669dbfda98f98b6dcc70ea0122f5fe7d1e71cabbd8cae Jan 20 04:09:12 crc kubenswrapper[4898]: I0120 04:09:12.925763 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0","Type":"ContainerStarted","Data":"459b6e017940f744d03669dbfda98f98b6dcc70ea0122f5fe7d1e71cabbd8cae"} Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.602556 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.665767 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8kw\" (UniqueName: \"kubernetes.io/projected/f8d0a764-e953-4e39-a363-134813e0fbc6-kube-api-access-ws8kw\") pod \"f8d0a764-e953-4e39-a363-134813e0fbc6\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.665846 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d0a764-e953-4e39-a363-134813e0fbc6-logs\") pod \"f8d0a764-e953-4e39-a363-134813e0fbc6\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.665900 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-combined-ca-bundle\") pod \"f8d0a764-e953-4e39-a363-134813e0fbc6\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.665987 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-config-data\") pod \"f8d0a764-e953-4e39-a363-134813e0fbc6\" (UID: \"f8d0a764-e953-4e39-a363-134813e0fbc6\") " Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.667842 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8d0a764-e953-4e39-a363-134813e0fbc6-logs" (OuterVolumeSpecName: "logs") pod "f8d0a764-e953-4e39-a363-134813e0fbc6" (UID: "f8d0a764-e953-4e39-a363-134813e0fbc6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.677879 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d0a764-e953-4e39-a363-134813e0fbc6-kube-api-access-ws8kw" (OuterVolumeSpecName: "kube-api-access-ws8kw") pod "f8d0a764-e953-4e39-a363-134813e0fbc6" (UID: "f8d0a764-e953-4e39-a363-134813e0fbc6"). InnerVolumeSpecName "kube-api-access-ws8kw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.702550 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-config-data" (OuterVolumeSpecName: "config-data") pod "f8d0a764-e953-4e39-a363-134813e0fbc6" (UID: "f8d0a764-e953-4e39-a363-134813e0fbc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.712143 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8d0a764-e953-4e39-a363-134813e0fbc6" (UID: "f8d0a764-e953-4e39-a363-134813e0fbc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.739794 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1fd031d-4a12-4968-aedd-fca53d682a02" path="/var/lib/kubelet/pods/c1fd031d-4a12-4968-aedd-fca53d682a02/volumes" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.768919 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.768956 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws8kw\" (UniqueName: \"kubernetes.io/projected/f8d0a764-e953-4e39-a363-134813e0fbc6-kube-api-access-ws8kw\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.768967 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d0a764-e953-4e39-a363-134813e0fbc6-logs\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.768976 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d0a764-e953-4e39-a363-134813e0fbc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.940585 4898 generic.go:334] "Generic (PLEG): container finished" podID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerID="72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0" exitCode=0 Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.940661 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.940689 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8d0a764-e953-4e39-a363-134813e0fbc6","Type":"ContainerDied","Data":"72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0"} Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.941380 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8d0a764-e953-4e39-a363-134813e0fbc6","Type":"ContainerDied","Data":"42e115cdac8523ea06c622ffa25eb73364e8a91aa1d27e1315fb809e276f265d"} Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.941409 4898 scope.go:117] "RemoveContainer" containerID="72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.942864 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0","Type":"ContainerStarted","Data":"548b3a9249478722aac94ca246d130c46d155151d34a8765f44ad17e7828e2cf"} Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.968507 4898 scope.go:117] "RemoveContainer" containerID="1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625" Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.973239 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:13 crc kubenswrapper[4898]: I0120 04:09:13.983077 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.002214 4898 scope.go:117] "RemoveContainer" containerID="72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0" Jan 20 04:09:14 crc kubenswrapper[4898]: E0120 04:09:14.003452 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0\": container with ID starting with 72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0 not found: ID does not exist" containerID="72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.003499 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0"} err="failed to get container status \"72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0\": rpc error: code = NotFound desc = could not find container \"72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0\": container with ID starting with 72dd1242808c65ec0c0765593914796570b33590154bed388355fff76eec09f0 not found: ID does not exist" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.003529 4898 scope.go:117] "RemoveContainer" containerID="1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625" Jan 20 04:09:14 crc kubenswrapper[4898]: E0120 04:09:14.004030 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625\": container with ID starting with 1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625 not found: ID does not exist" containerID="1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.004077 4898 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625"} err="failed to get container status \"1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625\": rpc error: code = NotFound desc = could not find container \"1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625\": container with ID starting with 1c9b4dd649b38d6b360542febf189028d8d642f6b308984283da11e4c0a80625 not found: ID does not exist" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.044508 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:14 crc kubenswrapper[4898]: E0120 04:09:14.045000 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerName="nova-api-api" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.045023 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerName="nova-api-api" Jan 20 04:09:14 crc kubenswrapper[4898]: E0120 04:09:14.045051 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerName="nova-api-log" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.045061 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerName="nova-api-log" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.045246 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerName="nova-api-api" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.045278 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d0a764-e953-4e39-a363-134813e0fbc6" containerName="nova-api-log" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.046488 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.051204 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.051426 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.051583 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.067600 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.190449 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-logs\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.190689 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-public-tls-certs\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.190809 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hkpd\" (UniqueName: \"kubernetes.io/projected/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-kube-api-access-5hkpd\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.190883 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-config-data\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.190947 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.191104 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.292918 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-config-data\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.292975 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.293044 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.293092 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-logs\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.293107 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-public-tls-certs\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.293145 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hkpd\" (UniqueName: \"kubernetes.io/projected/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-kube-api-access-5hkpd\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.295942 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-logs\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.297132 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.297461 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-public-tls-certs\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.298316 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-config-data\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.307575 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.311554 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hkpd\" (UniqueName: \"kubernetes.io/projected/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-kube-api-access-5hkpd\") pod \"nova-api-0\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " pod="openstack/nova-api-0" Jan 
20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.378470 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:09:14 crc kubenswrapper[4898]: W0120 04:09:14.849896 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda53b45e0_5b4f_4094_ae2a_2b517b5ad672.slice/crio-579aac4ef0ce9cc9027d8672e0397fc5658077addad14f81ce394dee37acb950 WatchSource:0}: Error finding container 579aac4ef0ce9cc9027d8672e0397fc5658077addad14f81ce394dee37acb950: Status 404 returned error can't find the container with id 579aac4ef0ce9cc9027d8672e0397fc5658077addad14f81ce394dee37acb950 Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.851488 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.957222 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a53b45e0-5b4f-4094-ae2a-2b517b5ad672","Type":"ContainerStarted","Data":"579aac4ef0ce9cc9027d8672e0397fc5658077addad14f81ce394dee37acb950"} Jan 20 04:09:14 crc kubenswrapper[4898]: I0120 04:09:14.960035 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0","Type":"ContainerStarted","Data":"c0882c231f87491fd0df01c9cc601e0e0b575179219386be9fb57960ef374a86"} Jan 20 04:09:15 crc kubenswrapper[4898]: I0120 04:09:15.222513 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:15 crc kubenswrapper[4898]: I0120 04:09:15.247178 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:15 crc kubenswrapper[4898]: I0120 04:09:15.732617 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d0a764-e953-4e39-a363-134813e0fbc6" path="/var/lib/kubelet/pods/f8d0a764-e953-4e39-a363-134813e0fbc6/volumes" Jan 20 04:09:15 crc kubenswrapper[4898]: I0120 04:09:15.981712 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a53b45e0-5b4f-4094-ae2a-2b517b5ad672","Type":"ContainerStarted","Data":"15689b906c05f1a3f8b5611821edd2a575a7218356441d825579f1f287c45296"} Jan 20 04:09:15 crc kubenswrapper[4898]: I0120 04:09:15.981760 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a53b45e0-5b4f-4094-ae2a-2b517b5ad672","Type":"ContainerStarted","Data":"1920bf32fa63fea4c9fa95e99dc236ecb730887a664fcc84b3456af3eca12ac0"} Jan 20 04:09:15 crc kubenswrapper[4898]: I0120 04:09:15.988769 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0","Type":"ContainerStarted","Data":"07d13c08dbefccfea366f236888b9445344cb84955ae4a9256e824951af3a9b9"} Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.003386 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.003365086 podStartE2EDuration="3.003365086s" podCreationTimestamp="2026-01-20 04:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:09:16.001983931 +0000 UTC m=+1202.601771790" watchObservedRunningTime="2026-01-20 04:09:16.003365086 +0000 UTC m=+1202.603152945" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 
04:09:16.020351 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.236562 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-2s9t2"] Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.238177 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.242821 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.243010 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.264282 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2s9t2"] Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.342091 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-config-data\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.342164 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.342189 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t7qq\" (UniqueName: \"kubernetes.io/projected/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-kube-api-access-4t7qq\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.342217 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-scripts\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.444247 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-config-data\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.444329 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.444359 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4t7qq\" (UniqueName: \"kubernetes.io/projected/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-kube-api-access-4t7qq\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.444386 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-scripts\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.451325 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-scripts\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.451346 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-config-data\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.452947 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.461377 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t7qq\" (UniqueName: \"kubernetes.io/projected/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-kube-api-access-4t7qq\") pod \"nova-cell1-cell-mapping-2s9t2\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:16 crc kubenswrapper[4898]: I0120 04:09:16.565946 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:17 crc kubenswrapper[4898]: I0120 04:09:17.081720 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2s9t2"] Jan 20 04:09:17 crc kubenswrapper[4898]: I0120 04:09:17.420709 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-9cg5d" Jan 20 04:09:17 crc kubenswrapper[4898]: I0120 04:09:17.503636 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsktj"] Jan 20 04:09:17 crc kubenswrapper[4898]: I0120 04:09:17.505772 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" podUID="77b4083e-c020-4b2b-8cac-cfb81dd3718c" containerName="dnsmasq-dns" containerID="cri-o://244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070" gracePeriod=10 Jan 20 04:09:17 crc kubenswrapper[4898]: I0120 04:09:17.916517 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:09:17 crc kubenswrapper[4898]: I0120 04:09:17.971162 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-nb\") pod \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " Jan 20 04:09:17 crc kubenswrapper[4898]: I0120 04:09:17.971200 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxq8x\" (UniqueName: \"kubernetes.io/projected/77b4083e-c020-4b2b-8cac-cfb81dd3718c-kube-api-access-nxq8x\") pod \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " Jan 20 04:09:17 crc kubenswrapper[4898]: I0120 04:09:17.971239 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-svc\") pod \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " Jan 20 04:09:17 crc kubenswrapper[4898]: I0120 04:09:17.971270 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-config\") pod \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " Jan 20 04:09:17 crc kubenswrapper[4898]: I0120 04:09:17.971285 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-sb\") pod \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " Jan 20 04:09:17 crc kubenswrapper[4898]: I0120 04:09:17.971351 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-swift-storage-0\") pod \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\" (UID: \"77b4083e-c020-4b2b-8cac-cfb81dd3718c\") " Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:17.988764 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b4083e-c020-4b2b-8cac-cfb81dd3718c-kube-api-access-nxq8x" (OuterVolumeSpecName: "kube-api-access-nxq8x") pod "77b4083e-c020-4b2b-8cac-cfb81dd3718c" (UID: "77b4083e-c020-4b2b-8cac-cfb81dd3718c"). InnerVolumeSpecName "kube-api-access-nxq8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.017570 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2s9t2" event={"ID":"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff","Type":"ContainerStarted","Data":"d837675c0c7561d80a191261b45b9dd86f0a334435bf491fae2e39e942fa3b6c"} Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.017630 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2s9t2" event={"ID":"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff","Type":"ContainerStarted","Data":"2ad0a28c795b63fc190346a6c37f17c110b44147e711570e50d31590f1b377a9"} Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.030421 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0","Type":"ContainerStarted","Data":"cec50adb31bb5f79cc09d9dce3c2de0c16be62acd459e19fd78449532ee2a1da"} Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.031541 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.035582 4898 generic.go:334] "Generic (PLEG): container finished" podID="77b4083e-c020-4b2b-8cac-cfb81dd3718c" containerID="244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070" exitCode=0 Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.035644 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" event={"ID":"77b4083e-c020-4b2b-8cac-cfb81dd3718c","Type":"ContainerDied","Data":"244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070"} Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.035676 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" event={"ID":"77b4083e-c020-4b2b-8cac-cfb81dd3718c","Type":"ContainerDied","Data":"5ca30a41d2d4207df3b05d6cc107c27a79f06028afb7f2b86c91fe5d5865eebe"} Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.035696 4898 scope.go:117] "RemoveContainer" containerID="244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.043581 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsktj" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.046711 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-2s9t2" podStartSLOduration=2.046694231 podStartE2EDuration="2.046694231s" podCreationTimestamp="2026-01-20 04:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:09:18.03971905 +0000 UTC m=+1204.639506909" watchObservedRunningTime="2026-01-20 04:09:18.046694231 +0000 UTC m=+1204.646482090" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.056643 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "77b4083e-c020-4b2b-8cac-cfb81dd3718c" (UID: "77b4083e-c020-4b2b-8cac-cfb81dd3718c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.058989 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "77b4083e-c020-4b2b-8cac-cfb81dd3718c" (UID: "77b4083e-c020-4b2b-8cac-cfb81dd3718c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.068801 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "77b4083e-c020-4b2b-8cac-cfb81dd3718c" (UID: "77b4083e-c020-4b2b-8cac-cfb81dd3718c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.074680 4898 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.075450 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.075589 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxq8x\" (UniqueName: \"kubernetes.io/projected/77b4083e-c020-4b2b-8cac-cfb81dd3718c-kube-api-access-nxq8x\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.075675 4898 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.080811 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.462163139 podStartE2EDuration="7.080797653s" podCreationTimestamp="2026-01-20 04:09:11 +0000 UTC" firstStartedPulling="2026-01-20 04:09:12.874579089 +0000 UTC m=+1199.474366988" lastFinishedPulling="2026-01-20 04:09:16.493213643 +0000 UTC m=+1203.093001502" observedRunningTime="2026-01-20 04:09:18.063802724 +0000 UTC m=+1204.663590593" watchObservedRunningTime="2026-01-20 04:09:18.080797653 +0000 UTC m=+1204.680585522" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.090886 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "77b4083e-c020-4b2b-8cac-cfb81dd3718c" (UID: "77b4083e-c020-4b2b-8cac-cfb81dd3718c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.098395 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-config" (OuterVolumeSpecName: "config") pod "77b4083e-c020-4b2b-8cac-cfb81dd3718c" (UID: "77b4083e-c020-4b2b-8cac-cfb81dd3718c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.178033 4898 scope.go:117] "RemoveContainer" containerID="6131c1a75e24e478a2b4b04b41d3a29fe082005610602f731eb726ffd6cd4bb9" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.178092 4898 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-config\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.178113 4898 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77b4083e-c020-4b2b-8cac-cfb81dd3718c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.200260 4898 scope.go:117] "RemoveContainer" containerID="244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070" Jan 20 04:09:18 crc kubenswrapper[4898]: E0120 04:09:18.200710 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070\": container with ID starting with 244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070 not found: ID does not exist" containerID="244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.200754 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070"} err="failed to get container status \"244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070\": rpc error: code = NotFound desc = could not find container \"244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070\": container with ID starting with 244032a3fc2492a3e47e236c0ac745f7b7d38446b0c90ed78378bdcb08605070 not found: ID does not exist" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.200782 4898 scope.go:117] "RemoveContainer" containerID="6131c1a75e24e478a2b4b04b41d3a29fe082005610602f731eb726ffd6cd4bb9" Jan 20 04:09:18 crc kubenswrapper[4898]: E0120 04:09:18.201131 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6131c1a75e24e478a2b4b04b41d3a29fe082005610602f731eb726ffd6cd4bb9\": container with ID starting with 6131c1a75e24e478a2b4b04b41d3a29fe082005610602f731eb726ffd6cd4bb9 not found: ID does not exist" containerID="6131c1a75e24e478a2b4b04b41d3a29fe082005610602f731eb726ffd6cd4bb9" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.201164 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6131c1a75e24e478a2b4b04b41d3a29fe082005610602f731eb726ffd6cd4bb9"} err="failed to get container status \"6131c1a75e24e478a2b4b04b41d3a29fe082005610602f731eb726ffd6cd4bb9\": rpc error: code = NotFound desc = could not find container \"6131c1a75e24e478a2b4b04b41d3a29fe082005610602f731eb726ffd6cd4bb9\": container with ID starting with 6131c1a75e24e478a2b4b04b41d3a29fe082005610602f731eb726ffd6cd4bb9 not found: ID does not exist" Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.372342 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsktj"] Jan 20 04:09:18 crc kubenswrapper[4898]: I0120 04:09:18.379875 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsktj"] 
Jan 20 04:09:19 crc kubenswrapper[4898]: E0120 04:09:19.697164 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice/crio-42e115cdac8523ea06c622ffa25eb73364e8a91aa1d27e1315fb809e276f265d\": RecentStats: unable to find data in memory cache]" Jan 20 04:09:19 crc kubenswrapper[4898]: I0120 04:09:19.750794 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b4083e-c020-4b2b-8cac-cfb81dd3718c" path="/var/lib/kubelet/pods/77b4083e-c020-4b2b-8cac-cfb81dd3718c/volumes" Jan 20 04:09:22 crc kubenswrapper[4898]: I0120 04:09:22.081773 4898 generic.go:334] "Generic (PLEG): container finished" podID="2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff" containerID="d837675c0c7561d80a191261b45b9dd86f0a334435bf491fae2e39e942fa3b6c" exitCode=0 Jan 20 04:09:22 crc kubenswrapper[4898]: I0120 04:09:22.081823 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2s9t2" event={"ID":"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff","Type":"ContainerDied","Data":"d837675c0c7561d80a191261b45b9dd86f0a334435bf491fae2e39e942fa3b6c"} Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.624948 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.791700 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-combined-ca-bundle\") pod \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.791748 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-config-data\") pod \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.791809 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t7qq\" (UniqueName: \"kubernetes.io/projected/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-kube-api-access-4t7qq\") pod \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.791911 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-scripts\") pod \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\" (UID: \"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff\") " Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.797581 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-kube-api-access-4t7qq" (OuterVolumeSpecName: "kube-api-access-4t7qq") pod "2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff" (UID: "2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff"). InnerVolumeSpecName "kube-api-access-4t7qq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.802672 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-scripts" (OuterVolumeSpecName: "scripts") pod "2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff" (UID: "2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.832520 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-config-data" (OuterVolumeSpecName: "config-data") pod "2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff" (UID: "2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.836714 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff" (UID: "2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.894961 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.895376 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.895397 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t7qq\" (UniqueName: \"kubernetes.io/projected/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-kube-api-access-4t7qq\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:23 crc kubenswrapper[4898]: I0120 04:09:23.895418 4898 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:24 crc kubenswrapper[4898]: I0120 04:09:24.112231 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2s9t2" event={"ID":"2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff","Type":"ContainerDied","Data":"2ad0a28c795b63fc190346a6c37f17c110b44147e711570e50d31590f1b377a9"} Jan 20 04:09:24 crc kubenswrapper[4898]: I0120 04:09:24.112318 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ad0a28c795b63fc190346a6c37f17c110b44147e711570e50d31590f1b377a9" Jan 20 04:09:24 crc kubenswrapper[4898]: I0120 04:09:24.112428 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2s9t2" Jan 20 04:09:24 crc kubenswrapper[4898]: I0120 04:09:24.304770 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:24 crc kubenswrapper[4898]: I0120 04:09:24.305287 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a53b45e0-5b4f-4094-ae2a-2b517b5ad672" containerName="nova-api-api" containerID="cri-o://15689b906c05f1a3f8b5611821edd2a575a7218356441d825579f1f287c45296" gracePeriod=30 Jan 20 04:09:24 crc kubenswrapper[4898]: I0120 04:09:24.305043 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a53b45e0-5b4f-4094-ae2a-2b517b5ad672" containerName="nova-api-log" containerID="cri-o://1920bf32fa63fea4c9fa95e99dc236ecb730887a664fcc84b3456af3eca12ac0" gracePeriod=30 Jan 20 04:09:24 crc kubenswrapper[4898]: I0120 04:09:24.338400 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:09:24 crc kubenswrapper[4898]: I0120 04:09:24.338735 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6b38ae07-5732-4b51-96cc-4981a61b4ade" containerName="nova-scheduler-scheduler" containerID="cri-o://6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d" gracePeriod=30 Jan 20 04:09:24 crc kubenswrapper[4898]: I0120 04:09:24.368389 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:09:24 crc kubenswrapper[4898]: I0120 04:09:24.368696 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerName="nova-metadata-log" containerID="cri-o://f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84" gracePeriod=30 Jan 20 04:09:24 crc kubenswrapper[4898]: I0120 04:09:24.368837 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerName="nova-metadata-metadata" containerID="cri-o://7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed" gracePeriod=30 Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.129325 4898 generic.go:334] "Generic (PLEG): container finished" podID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerID="f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84" exitCode=143 Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.129409 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61","Type":"ContainerDied","Data":"f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84"} Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.132215 4898 generic.go:334] "Generic (PLEG): container finished" podID="a53b45e0-5b4f-4094-ae2a-2b517b5ad672" containerID="15689b906c05f1a3f8b5611821edd2a575a7218356441d825579f1f287c45296" exitCode=0 Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.132244 4898 generic.go:334] "Generic (PLEG): container finished" podID="a53b45e0-5b4f-4094-ae2a-2b517b5ad672" containerID="1920bf32fa63fea4c9fa95e99dc236ecb730887a664fcc84b3456af3eca12ac0" exitCode=143 Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.132259 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"a53b45e0-5b4f-4094-ae2a-2b517b5ad672","Type":"ContainerDied","Data":"15689b906c05f1a3f8b5611821edd2a575a7218356441d825579f1f287c45296"} Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.132343 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a53b45e0-5b4f-4094-ae2a-2b517b5ad672","Type":"ContainerDied","Data":"1920bf32fa63fea4c9fa95e99dc236ecb730887a664fcc84b3456af3eca12ac0"} Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.340045 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.462502 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-config-data\") pod \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.462607 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-internal-tls-certs\") pod \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.462697 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-combined-ca-bundle\") pod \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.462834 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-logs\") pod \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.462889 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-public-tls-certs\") pod \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.462949 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hkpd\" (UniqueName: \"kubernetes.io/projected/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-kube-api-access-5hkpd\") pod \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\" (UID: \"a53b45e0-5b4f-4094-ae2a-2b517b5ad672\") " Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.463383 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-logs" (OuterVolumeSpecName: "logs") pod "a53b45e0-5b4f-4094-ae2a-2b517b5ad672" (UID: "a53b45e0-5b4f-4094-ae2a-2b517b5ad672"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.468640 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-kube-api-access-5hkpd" (OuterVolumeSpecName: "kube-api-access-5hkpd") pod "a53b45e0-5b4f-4094-ae2a-2b517b5ad672" (UID: "a53b45e0-5b4f-4094-ae2a-2b517b5ad672"). InnerVolumeSpecName "kube-api-access-5hkpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.488672 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-config-data" (OuterVolumeSpecName: "config-data") pod "a53b45e0-5b4f-4094-ae2a-2b517b5ad672" (UID: "a53b45e0-5b4f-4094-ae2a-2b517b5ad672"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.498925 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a53b45e0-5b4f-4094-ae2a-2b517b5ad672" (UID: "a53b45e0-5b4f-4094-ae2a-2b517b5ad672"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.512874 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a53b45e0-5b4f-4094-ae2a-2b517b5ad672" (UID: "a53b45e0-5b4f-4094-ae2a-2b517b5ad672"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.524538 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a53b45e0-5b4f-4094-ae2a-2b517b5ad672" (UID: "a53b45e0-5b4f-4094-ae2a-2b517b5ad672"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.564633 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hkpd\" (UniqueName: \"kubernetes.io/projected/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-kube-api-access-5hkpd\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.565027 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.565043 4898 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.565052 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.565066 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-logs\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:25 crc kubenswrapper[4898]: I0120 04:09:25.565108 4898 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a53b45e0-5b4f-4094-ae2a-2b517b5ad672-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:26 crc kubenswrapper[4898]: E0120 04:09:26.033654 4898 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 04:09:26 crc kubenswrapper[4898]: E0120 04:09:26.035399 4898 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 04:09:26 crc kubenswrapper[4898]: E0120 04:09:26.036746 4898 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 04:09:26 crc kubenswrapper[4898]: E0120 04:09:26.036794 4898 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6b38ae07-5732-4b51-96cc-4981a61b4ade" containerName="nova-scheduler-scheduler" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.143803 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a53b45e0-5b4f-4094-ae2a-2b517b5ad672","Type":"ContainerDied","Data":"579aac4ef0ce9cc9027d8672e0397fc5658077addad14f81ce394dee37acb950"} Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.143853 4898 scope.go:117] "RemoveContainer" containerID="15689b906c05f1a3f8b5611821edd2a575a7218356441d825579f1f287c45296" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.143857 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.176590 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.196499 4898 scope.go:117] "RemoveContainer" containerID="1920bf32fa63fea4c9fa95e99dc236ecb730887a664fcc84b3456af3eca12ac0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.200375 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.218743 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:26 crc kubenswrapper[4898]: E0120 04:09:26.219210 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53b45e0-5b4f-4094-ae2a-2b517b5ad672" containerName="nova-api-api" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.219224 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53b45e0-5b4f-4094-ae2a-2b517b5ad672" containerName="nova-api-api" Jan 20 04:09:26 crc kubenswrapper[4898]: E0120 04:09:26.219261 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53b45e0-5b4f-4094-ae2a-2b517b5ad672" containerName="nova-api-log" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.219282 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53b45e0-5b4f-4094-ae2a-2b517b5ad672" containerName="nova-api-log" Jan 20 04:09:26 crc kubenswrapper[4898]: E0120 04:09:26.219298 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b4083e-c020-4b2b-8cac-cfb81dd3718c" containerName="init" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.219306 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b4083e-c020-4b2b-8cac-cfb81dd3718c" containerName="init" Jan 20 04:09:26 crc kubenswrapper[4898]: E0120 04:09:26.219323 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff" containerName="nova-manage" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.219331 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff" containerName="nova-manage" Jan 20 04:09:26 crc kubenswrapper[4898]: E0120 04:09:26.219356 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b4083e-c020-4b2b-8cac-cfb81dd3718c" containerName="dnsmasq-dns" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.219366 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b4083e-c020-4b2b-8cac-cfb81dd3718c" containerName="dnsmasq-dns" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.220125 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b4083e-c020-4b2b-8cac-cfb81dd3718c" containerName="dnsmasq-dns" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.220153 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="a53b45e0-5b4f-4094-ae2a-2b517b5ad672" containerName="nova-api-log" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.220179 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="a53b45e0-5b4f-4094-ae2a-2b517b5ad672" containerName="nova-api-api" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.220193 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff" containerName="nova-manage" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.222638 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.224955 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.231008 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.231064 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.232672 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.380337 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-public-tls-certs\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.380691 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-config-data\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.380717 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.380741 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7h4l\" (UniqueName: \"kubernetes.io/projected/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-kube-api-access-d7h4l\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.380902 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-logs\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.380977 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.482885 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-logs\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.483274 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.483608 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-public-tls-certs\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.483342 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-logs\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.484198 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-config-data\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.484388 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.484665 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7h4l\" (UniqueName: \"kubernetes.io/projected/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-kube-api-access-d7h4l\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.488646 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-config-data\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.489007 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-public-tls-certs\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.489019 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.490085 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.502030 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7h4l\" (UniqueName: \"kubernetes.io/projected/d8b34b96-1828-4d70-82bd-3cc6c02f76a9-kube-api-access-d7h4l\") pod \"nova-api-0\" (UID: \"d8b34b96-1828-4d70-82bd-3cc6c02f76a9\") " 
pod="openstack/nova-api-0" Jan 20 04:09:26 crc kubenswrapper[4898]: I0120 04:09:26.557245 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 04:09:27 crc kubenswrapper[4898]: I0120 04:09:27.004421 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 04:09:27 crc kubenswrapper[4898]: I0120 04:09:27.157425 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8b34b96-1828-4d70-82bd-3cc6c02f76a9","Type":"ContainerStarted","Data":"eab85f87661e2d60d9d2c6214faf726b881279ce3d9d3aed3dea4e5a5adce6a9"} Jan 20 04:09:27 crc kubenswrapper[4898]: I0120 04:09:27.730886 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a53b45e0-5b4f-4094-ae2a-2b517b5ad672" path="/var/lib/kubelet/pods/a53b45e0-5b4f-4094-ae2a-2b517b5ad672/volumes" Jan 20 04:09:27 crc kubenswrapper[4898]: I0120 04:09:27.963807 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.119203 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzk8h\" (UniqueName: \"kubernetes.io/projected/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-kube-api-access-nzk8h\") pod \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.119277 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-combined-ca-bundle\") pod \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.119380 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-config-data\") pod \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.119505 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-nova-metadata-tls-certs\") pod \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.119636 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-logs\") pod \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\" (UID: \"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61\") " Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.120117 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-logs" (OuterVolumeSpecName: "logs") pod "68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" (UID: "68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.120303 4898 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-logs\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.130093 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-kube-api-access-nzk8h" (OuterVolumeSpecName: "kube-api-access-nzk8h") pod "68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" (UID: "68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61"). InnerVolumeSpecName "kube-api-access-nzk8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.145989 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" (UID: "68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.154304 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-config-data" (OuterVolumeSpecName: "config-data") pod "68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" (UID: "68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.168287 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8b34b96-1828-4d70-82bd-3cc6c02f76a9","Type":"ContainerStarted","Data":"cf15c651ce3125ac693c5117f7e19ccc2919dcc4dc9bafe07195169b6fea9098"} Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.168331 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8b34b96-1828-4d70-82bd-3cc6c02f76a9","Type":"ContainerStarted","Data":"7a13623f891504918ec9f448f4552b7e1d1e8be909a9f90960daf221b3995b2c"} Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.171409 4898 generic.go:334] "Generic (PLEG): container finished" podID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerID="7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed" exitCode=0 Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.171613 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61","Type":"ContainerDied","Data":"7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed"} Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.171690 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61","Type":"ContainerDied","Data":"3b2bb71cd3fc2c5b6deeb8ed690121d77c2f0425e4cca39323558d6feb5f83be"} Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.171767 4898 scope.go:117] "RemoveContainer" containerID="7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.171925 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.184044 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" (UID: "68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.192410 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.192386404 podStartE2EDuration="2.192386404s" podCreationTimestamp="2026-01-20 04:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:09:28.187064565 +0000 UTC m=+1214.786852434" watchObservedRunningTime="2026-01-20 04:09:28.192386404 +0000 UTC m=+1214.792174263" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.202303 4898 scope.go:117] "RemoveContainer" containerID="f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.220227 4898 scope.go:117] "RemoveContainer" containerID="7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed" Jan 20 04:09:28 crc kubenswrapper[4898]: E0120 04:09:28.221064 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed\": container with ID starting with 7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed not found: ID does not exist" containerID="7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.221099 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed"} err="failed to get container status \"7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed\": rpc error: code = NotFound desc = could not find container \"7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed\": container with ID starting with 7086b8824029778e444b061b9eb9607e8f0fa7d4415cc59fb2a64214264016ed not found: ID does not exist" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.221119 4898 scope.go:117] "RemoveContainer" containerID="f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84" Jan 20 04:09:28 crc kubenswrapper[4898]: E0120 04:09:28.221361 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84\": container with ID starting with f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84 not found: ID does not exist" containerID="f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.221383 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84"} err="failed to get container status \"f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84\": rpc error: code = NotFound desc = could not find container 
\"f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84\": container with ID starting with f2e18110e597f11a12ed57a338ee81f1f27b8311ebbce31a7ef52f27e78f9f84 not found: ID does not exist" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.221596 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzk8h\" (UniqueName: \"kubernetes.io/projected/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-kube-api-access-nzk8h\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.221620 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.221629 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.221638 4898 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.505710 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.521294 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.532582 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:09:28 crc kubenswrapper[4898]: E0120 04:09:28.532997 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerName="nova-metadata-metadata" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.533016 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerName="nova-metadata-metadata" Jan 20 04:09:28 crc kubenswrapper[4898]: E0120 04:09:28.533027 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerName="nova-metadata-log" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.533033 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerName="nova-metadata-log" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.533224 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerName="nova-metadata-metadata" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.533236 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" containerName="nova-metadata-log" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.534186 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.555371 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.556386 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.556667 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.729306 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a46086c-f810-4cb5-aef8-8d12bb3d292f-config-data\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.729424 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a46086c-f810-4cb5-aef8-8d12bb3d292f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.729972 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l525\" (UniqueName: \"kubernetes.io/projected/0a46086c-f810-4cb5-aef8-8d12bb3d292f-kube-api-access-2l525\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.730503 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a46086c-f810-4cb5-aef8-8d12bb3d292f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.730565 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a46086c-f810-4cb5-aef8-8d12bb3d292f-logs\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.832216 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a46086c-f810-4cb5-aef8-8d12bb3d292f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.832290 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l525\" (UniqueName: \"kubernetes.io/projected/0a46086c-f810-4cb5-aef8-8d12bb3d292f-kube-api-access-2l525\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.832317 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a46086c-f810-4cb5-aef8-8d12bb3d292f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.832338 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a46086c-f810-4cb5-aef8-8d12bb3d292f-logs\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.832402 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a46086c-f810-4cb5-aef8-8d12bb3d292f-config-data\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.832839 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a46086c-f810-4cb5-aef8-8d12bb3d292f-logs\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.836160 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a46086c-f810-4cb5-aef8-8d12bb3d292f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.836398 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a46086c-f810-4cb5-aef8-8d12bb3d292f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.841414 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a46086c-f810-4cb5-aef8-8d12bb3d292f-config-data\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.856013 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l525\" (UniqueName: \"kubernetes.io/projected/0a46086c-f810-4cb5-aef8-8d12bb3d292f-kube-api-access-2l525\") pod \"nova-metadata-0\" (UID: \"0a46086c-f810-4cb5-aef8-8d12bb3d292f\") " pod="openstack/nova-metadata-0" Jan 20 04:09:28 crc kubenswrapper[4898]: I0120 04:09:28.963402 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 04:09:29 crc kubenswrapper[4898]: I0120 04:09:29.414216 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 04:09:29 crc kubenswrapper[4898]: W0120 04:09:29.423262 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a46086c_f810_4cb5_aef8_8d12bb3d292f.slice/crio-980f9ca9a8b97fe8e6947a34d6c4fcaad93d15cade357a5d80c042ea8eb208a6 WatchSource:0}: Error finding container 980f9ca9a8b97fe8e6947a34d6c4fcaad93d15cade357a5d80c042ea8eb208a6: Status 404 returned error can't find the container with id 980f9ca9a8b97fe8e6947a34d6c4fcaad93d15cade357a5d80c042ea8eb208a6 Jan 20 04:09:29 crc kubenswrapper[4898]: I0120 04:09:29.747030 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61" path="/var/lib/kubelet/pods/68e7bc92-efb7-4e9f-9f93-aefc0fc9cc61/volumes" Jan 20 04:09:29 crc kubenswrapper[4898]: E0120 04:09:29.957873 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice/crio-42e115cdac8523ea06c622ffa25eb73364e8a91aa1d27e1315fb809e276f265d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice\": RecentStats: unable to find data in memory cache]" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.074383 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.191864 4898 generic.go:334] "Generic (PLEG): container finished" podID="6b38ae07-5732-4b51-96cc-4981a61b4ade" containerID="6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d" exitCode=0 Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.191915 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6b38ae07-5732-4b51-96cc-4981a61b4ade","Type":"ContainerDied","Data":"6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d"} Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.192190 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6b38ae07-5732-4b51-96cc-4981a61b4ade","Type":"ContainerDied","Data":"ad041e8324814c05f4d5c98f8f59994882014b4122a90f343a750408db8cacdb"} Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.191924 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.192255 4898 scope.go:117] "RemoveContainer" containerID="6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.193841 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a46086c-f810-4cb5-aef8-8d12bb3d292f","Type":"ContainerStarted","Data":"765bc3d5846c5f16f790578034054cadf7c25d4c4e8ba222d4b68c93dd31a6a6"} Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.193881 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a46086c-f810-4cb5-aef8-8d12bb3d292f","Type":"ContainerStarted","Data":"cb950511e844f60442db363c5042cff90feb2873c436385925f901799d985dd5"} Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.193896 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a46086c-f810-4cb5-aef8-8d12bb3d292f","Type":"ContainerStarted","Data":"980f9ca9a8b97fe8e6947a34d6c4fcaad93d15cade357a5d80c042ea8eb208a6"} Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.210083 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.210068275 podStartE2EDuration="2.210068275s" podCreationTimestamp="2026-01-20 04:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:09:30.209671043 +0000 UTC m=+1216.809458902" watchObservedRunningTime="2026-01-20 04:09:30.210068275 +0000 UTC m=+1216.809856134" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.213447 4898 scope.go:117] "RemoveContainer" containerID="6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d" Jan 20 04:09:30 crc kubenswrapper[4898]: E0120 04:09:30.213907 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d\": container with ID starting with 6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d not found: ID does not exist" containerID="6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.213939 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d"} err="failed to get container status \"6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d\": rpc error: code = NotFound desc = could not find container \"6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d\": container with ID starting with 6a8054e5fac54dd422f4e3225fc6819366a5ddf9b16490c67ed41e2e9485a05d not found: ID does not exist" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.261782 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-config-data\") pod \"6b38ae07-5732-4b51-96cc-4981a61b4ade\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.261854 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-combined-ca-bundle\") pod 
\"6b38ae07-5732-4b51-96cc-4981a61b4ade\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.261921 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69vvc\" (UniqueName: \"kubernetes.io/projected/6b38ae07-5732-4b51-96cc-4981a61b4ade-kube-api-access-69vvc\") pod \"6b38ae07-5732-4b51-96cc-4981a61b4ade\" (UID: \"6b38ae07-5732-4b51-96cc-4981a61b4ade\") " Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.266517 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b38ae07-5732-4b51-96cc-4981a61b4ade-kube-api-access-69vvc" (OuterVolumeSpecName: "kube-api-access-69vvc") pod "6b38ae07-5732-4b51-96cc-4981a61b4ade" (UID: "6b38ae07-5732-4b51-96cc-4981a61b4ade"). InnerVolumeSpecName "kube-api-access-69vvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.285721 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-config-data" (OuterVolumeSpecName: "config-data") pod "6b38ae07-5732-4b51-96cc-4981a61b4ade" (UID: "6b38ae07-5732-4b51-96cc-4981a61b4ade"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.298349 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b38ae07-5732-4b51-96cc-4981a61b4ade" (UID: "6b38ae07-5732-4b51-96cc-4981a61b4ade"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.364061 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.364093 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b38ae07-5732-4b51-96cc-4981a61b4ade-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.364106 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69vvc\" (UniqueName: \"kubernetes.io/projected/6b38ae07-5732-4b51-96cc-4981a61b4ade-kube-api-access-69vvc\") on node \"crc\" DevicePath \"\"" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.524337 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.535471 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.550644 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:09:30 crc kubenswrapper[4898]: E0120 04:09:30.551072 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b38ae07-5732-4b51-96cc-4981a61b4ade" containerName="nova-scheduler-scheduler" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.551094 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b38ae07-5732-4b51-96cc-4981a61b4ade" containerName="nova-scheduler-scheduler" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.551304 4898 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6b38ae07-5732-4b51-96cc-4981a61b4ade" containerName="nova-scheduler-scheduler" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.552048 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.557612 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.582255 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.677885 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmlrn\" (UniqueName: \"kubernetes.io/projected/899ce7b8-38d0-4d36-8d05-7ee9fe0599b3-kube-api-access-gmlrn\") pod \"nova-scheduler-0\" (UID: \"899ce7b8-38d0-4d36-8d05-7ee9fe0599b3\") " pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.677971 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ce7b8-38d0-4d36-8d05-7ee9fe0599b3-config-data\") pod \"nova-scheduler-0\" (UID: \"899ce7b8-38d0-4d36-8d05-7ee9fe0599b3\") " pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.678005 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ce7b8-38d0-4d36-8d05-7ee9fe0599b3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"899ce7b8-38d0-4d36-8d05-7ee9fe0599b3\") " pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.779721 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmlrn\" (UniqueName: \"kubernetes.io/projected/899ce7b8-38d0-4d36-8d05-7ee9fe0599b3-kube-api-access-gmlrn\") pod \"nova-scheduler-0\" (UID: \"899ce7b8-38d0-4d36-8d05-7ee9fe0599b3\") " pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.779808 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ce7b8-38d0-4d36-8d05-7ee9fe0599b3-config-data\") pod \"nova-scheduler-0\" (UID: \"899ce7b8-38d0-4d36-8d05-7ee9fe0599b3\") " pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.779847 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ce7b8-38d0-4d36-8d05-7ee9fe0599b3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"899ce7b8-38d0-4d36-8d05-7ee9fe0599b3\") " pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.784187 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ce7b8-38d0-4d36-8d05-7ee9fe0599b3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"899ce7b8-38d0-4d36-8d05-7ee9fe0599b3\") " pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.786328 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ce7b8-38d0-4d36-8d05-7ee9fe0599b3-config-data\") pod \"nova-scheduler-0\" (UID: 
\"899ce7b8-38d0-4d36-8d05-7ee9fe0599b3\") " pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.797421 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmlrn\" (UniqueName: \"kubernetes.io/projected/899ce7b8-38d0-4d36-8d05-7ee9fe0599b3-kube-api-access-gmlrn\") pod \"nova-scheduler-0\" (UID: \"899ce7b8-38d0-4d36-8d05-7ee9fe0599b3\") " pod="openstack/nova-scheduler-0" Jan 20 04:09:30 crc kubenswrapper[4898]: I0120 04:09:30.930307 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 04:09:31 crc kubenswrapper[4898]: I0120 04:09:31.374595 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 04:09:31 crc kubenswrapper[4898]: I0120 04:09:31.731889 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b38ae07-5732-4b51-96cc-4981a61b4ade" path="/var/lib/kubelet/pods/6b38ae07-5732-4b51-96cc-4981a61b4ade/volumes" Jan 20 04:09:32 crc kubenswrapper[4898]: I0120 04:09:32.216174 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"899ce7b8-38d0-4d36-8d05-7ee9fe0599b3","Type":"ContainerStarted","Data":"cd527576d65cd0d1d6e17f464cefe6dac91a054cc3a67b4de94358954c8154e6"} Jan 20 04:09:32 crc kubenswrapper[4898]: I0120 04:09:32.216230 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"899ce7b8-38d0-4d36-8d05-7ee9fe0599b3","Type":"ContainerStarted","Data":"58a05ade4c6bde3e986d2d2907185856f5a5b3efb7817247310c96e72ddaf5d2"} Jan 20 04:09:32 crc kubenswrapper[4898]: I0120 04:09:32.238508 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.238490587 podStartE2EDuration="2.238490587s" podCreationTimestamp="2026-01-20 04:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:09:32.2373165 +0000 UTC m=+1218.837104379" watchObservedRunningTime="2026-01-20 04:09:32.238490587 +0000 UTC m=+1218.838278466" Jan 20 04:09:33 crc kubenswrapper[4898]: I0120 04:09:33.964253 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 04:09:33 crc kubenswrapper[4898]: I0120 04:09:33.964533 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 04:09:35 crc kubenswrapper[4898]: I0120 04:09:35.931387 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 20 04:09:36 crc kubenswrapper[4898]: I0120 04:09:36.558111 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 04:09:36 crc kubenswrapper[4898]: I0120 04:09:36.559627 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 04:09:37 crc kubenswrapper[4898]: I0120 04:09:37.577650 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8b34b96-1828-4d70-82bd-3cc6c02f76a9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 04:09:37 crc kubenswrapper[4898]: I0120 04:09:37.577771 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="d8b34b96-1828-4d70-82bd-3cc6c02f76a9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 04:09:38 crc kubenswrapper[4898]: I0120 04:09:38.964150 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 04:09:38 crc kubenswrapper[4898]: I0120 04:09:38.964503 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 04:09:39 crc kubenswrapper[4898]: I0120 04:09:39.981946 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0a46086c-f810-4cb5-aef8-8d12bb3d292f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 04:09:39 crc kubenswrapper[4898]: I0120 04:09:39.981956 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0a46086c-f810-4cb5-aef8-8d12bb3d292f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 04:09:39 crc kubenswrapper[4898]: I0120 04:09:39.982460 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:09:39 crc kubenswrapper[4898]: I0120 04:09:39.982540 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:09:40 crc kubenswrapper[4898]: E0120 04:09:40.248137 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice/crio-42e115cdac8523ea06c622ffa25eb73364e8a91aa1d27e1315fb809e276f265d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice\": RecentStats: unable to find data in memory cache]" Jan 20 04:09:40 crc kubenswrapper[4898]: I0120 04:09:40.930978 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 20 04:09:40 crc kubenswrapper[4898]: I0120 04:09:40.958704 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 20 04:09:41 crc kubenswrapper[4898]: I0120 04:09:41.328274 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 20 04:09:42 crc kubenswrapper[4898]: I0120 04:09:42.363691 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 04:09:46 crc kubenswrapper[4898]: I0120 04:09:46.564803 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 04:09:46 crc kubenswrapper[4898]: I0120 04:09:46.566648 4898 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 04:09:46 crc kubenswrapper[4898]: I0120 04:09:46.567056 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 04:09:46 crc kubenswrapper[4898]: I0120 04:09:46.567103 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 04:09:46 crc kubenswrapper[4898]: I0120 04:09:46.577637 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 04:09:46 crc kubenswrapper[4898]: I0120 04:09:46.578815 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 04:09:48 crc kubenswrapper[4898]: I0120 04:09:48.969976 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 04:09:48 crc kubenswrapper[4898]: I0120 04:09:48.975537 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 04:09:48 crc kubenswrapper[4898]: I0120 04:09:48.976367 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 04:09:49 crc kubenswrapper[4898]: I0120 04:09:49.394146 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 04:09:50 crc kubenswrapper[4898]: E0120 04:09:50.495126 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice/crio-42e115cdac8523ea06c622ffa25eb73364e8a91aa1d27e1315fb809e276f265d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice\": RecentStats: unable to find data in memory cache]" Jan 20 04:10:00 crc kubenswrapper[4898]: E0120 04:10:00.714805 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice/crio-42e115cdac8523ea06c622ffa25eb73364e8a91aa1d27e1315fb809e276f265d\": RecentStats: unable to find data in memory cache]" Jan 20 04:10:09 crc kubenswrapper[4898]: I0120 04:10:09.976181 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:10:09 crc kubenswrapper[4898]: I0120 04:10:09.976884 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:10:09 crc kubenswrapper[4898]: I0120 04:10:09.976948 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 04:10:09 crc 
kubenswrapper[4898]: I0120 04:10:09.977881 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b7a7c76c50f5c70766a45b97b50c613bd67cf8335b24e271a6d0ca6195cc7af"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 04:10:09 crc kubenswrapper[4898]: I0120 04:10:09.977965 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://1b7a7c76c50f5c70766a45b97b50c613bd67cf8335b24e271a6d0ca6195cc7af" gracePeriod=600 Jan 20 04:10:10 crc kubenswrapper[4898]: I0120 04:10:10.605358 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="1b7a7c76c50f5c70766a45b97b50c613bd67cf8335b24e271a6d0ca6195cc7af" exitCode=0 Jan 20 04:10:10 crc kubenswrapper[4898]: I0120 04:10:10.605406 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"1b7a7c76c50f5c70766a45b97b50c613bd67cf8335b24e271a6d0ca6195cc7af"} Jan 20 04:10:10 crc kubenswrapper[4898]: I0120 04:10:10.605636 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"bad17c7f4ed87fcd8059992345ba30f1186ca3dfbfc22488f00e672024fef414"} Jan 20 04:10:10 crc kubenswrapper[4898]: I0120 04:10:10.605674 4898 scope.go:117] "RemoveContainer" containerID="e32845009c9a4455856ea9d28c879c6a89bdd3e67ca3cb7f9d9c98a468886eaa" Jan 20 04:10:10 crc kubenswrapper[4898]: E0120 04:10:10.968029 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0a764_e953_4e39_a363_134813e0fbc6.slice/crio-42e115cdac8523ea06c622ffa25eb73364e8a91aa1d27e1315fb809e276f265d\": RecentStats: unable to find data in memory cache]" Jan 20 04:11:15 crc kubenswrapper[4898]: I0120 04:11:15.136505 4898 scope.go:117] "RemoveContainer" containerID="26d97c32d0147d4d3b9b7fab46bb8da68c4025e476a428862d0422285502a92b" Jan 20 04:11:15 crc kubenswrapper[4898]: I0120 04:11:15.164838 4898 scope.go:117] "RemoveContainer" containerID="11bb3c93817ffacadd5d296f5a171ce68a89d32a4d1242479e10c9d58a5f230d" Jan 20 04:11:15 crc kubenswrapper[4898]: I0120 04:11:15.201611 4898 scope.go:117] "RemoveContainer" containerID="f040bd1d66b7fcaa2a6ac6c91c23d592f2c9f66a3eccc98068bdccde3b6dbeb8" Jan 20 04:11:15 crc kubenswrapper[4898]: I0120 04:11:15.265267 4898 scope.go:117] "RemoveContainer" containerID="3bb080dde22d1265b03a268f0c6dfd2ef5e0f6d15ebf39b8c6870afbae4eb796" Jan 20 04:11:15 crc kubenswrapper[4898]: I0120 04:11:15.287312 4898 scope.go:117] "RemoveContainer" containerID="ad7427bba363d2d5736a6d3f6c19c467f9bc04dc6882c55999358b9e02c58cfe" Jan 20 04:12:15 crc kubenswrapper[4898]: I0120 04:12:15.750587 4898 scope.go:117] "RemoveContainer" 
containerID="f033c4886f2eb053792196ca3c7a5123289faceb757765b43c450c6cae456f03" Jan 20 04:12:15 crc kubenswrapper[4898]: I0120 04:12:15.777706 4898 scope.go:117] "RemoveContainer" containerID="a831c779bc2247efb5547b5672f144a3ca5bf94b1ecc5b282c7b86a59f27148b" Jan 20 04:12:15 crc kubenswrapper[4898]: I0120 04:12:15.847102 4898 scope.go:117] "RemoveContainer" containerID="435c71510aebeddd4d19a7b21833135efc5e312faca852b46cd8a74af230ce5e" Jan 20 04:12:15 crc kubenswrapper[4898]: I0120 04:12:15.898831 4898 scope.go:117] "RemoveContainer" containerID="cb4199074583a7af2dbac646317eafaf23b3d964fb51addad2e8b286de7d3d3d" Jan 20 04:12:15 crc kubenswrapper[4898]: I0120 04:12:15.928946 4898 scope.go:117] "RemoveContainer" containerID="0e311235e68a95952ed865bf20148726b7b234604f71ba9b824e7a3f6e1a5588" Jan 20 04:12:39 crc kubenswrapper[4898]: I0120 04:12:39.977723 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:12:39 crc kubenswrapper[4898]: I0120 04:12:39.978526 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:13:09 crc kubenswrapper[4898]: I0120 04:13:09.976201 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:13:09 crc kubenswrapper[4898]: I0120 04:13:09.976703 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:13:39 crc kubenswrapper[4898]: I0120 04:13:39.975991 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:13:39 crc kubenswrapper[4898]: I0120 04:13:39.976759 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:13:39 crc kubenswrapper[4898]: I0120 04:13:39.976817 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 04:13:39 crc kubenswrapper[4898]: I0120 04:13:39.977627 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bad17c7f4ed87fcd8059992345ba30f1186ca3dfbfc22488f00e672024fef414"} 
pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 04:13:39 crc kubenswrapper[4898]: I0120 04:13:39.977694 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://bad17c7f4ed87fcd8059992345ba30f1186ca3dfbfc22488f00e672024fef414" gracePeriod=600 Jan 20 04:13:40 crc kubenswrapper[4898]: I0120 04:13:40.795792 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="bad17c7f4ed87fcd8059992345ba30f1186ca3dfbfc22488f00e672024fef414" exitCode=0 Jan 20 04:13:40 crc kubenswrapper[4898]: I0120 04:13:40.795972 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"bad17c7f4ed87fcd8059992345ba30f1186ca3dfbfc22488f00e672024fef414"} Jan 20 04:13:40 crc kubenswrapper[4898]: I0120 04:13:40.796302 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"} Jan 20 04:13:40 crc kubenswrapper[4898]: I0120 04:13:40.796336 4898 scope.go:117] "RemoveContainer" containerID="1b7a7c76c50f5c70766a45b97b50c613bd67cf8335b24e271a6d0ca6195cc7af" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.158244 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl"] Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.160716 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.164691 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.170197 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.202570 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl"] Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.311761 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svszq\" (UniqueName: \"kubernetes.io/projected/f241279c-d727-47c7-9cb8-3adf038b09d3-kube-api-access-svszq\") pod \"collect-profiles-29481375-qk2jl\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.311867 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f241279c-d727-47c7-9cb8-3adf038b09d3-config-volume\") pod \"collect-profiles-29481375-qk2jl\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.311975 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f241279c-d727-47c7-9cb8-3adf038b09d3-secret-volume\") pod \"collect-profiles-29481375-qk2jl\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.414466 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svszq\" (UniqueName: \"kubernetes.io/projected/f241279c-d727-47c7-9cb8-3adf038b09d3-kube-api-access-svszq\") pod \"collect-profiles-29481375-qk2jl\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.414555 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f241279c-d727-47c7-9cb8-3adf038b09d3-config-volume\") pod \"collect-profiles-29481375-qk2jl\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.414637 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f241279c-d727-47c7-9cb8-3adf038b09d3-secret-volume\") pod \"collect-profiles-29481375-qk2jl\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.415753 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f241279c-d727-47c7-9cb8-3adf038b09d3-config-volume\") pod 
\"collect-profiles-29481375-qk2jl\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.421366 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f241279c-d727-47c7-9cb8-3adf038b09d3-secret-volume\") pod \"collect-profiles-29481375-qk2jl\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.430280 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svszq\" (UniqueName: \"kubernetes.io/projected/f241279c-d727-47c7-9cb8-3adf038b09d3-kube-api-access-svszq\") pod \"collect-profiles-29481375-qk2jl\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.493839 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:00 crc kubenswrapper[4898]: I0120 04:15:00.946793 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl"] Jan 20 04:15:01 crc kubenswrapper[4898]: I0120 04:15:01.645710 4898 generic.go:334] "Generic (PLEG): container finished" podID="f241279c-d727-47c7-9cb8-3adf038b09d3" containerID="b37d51c7f71246b0ce8aeb7ae1395d0b2045f11d2428e300f4e7d77fc6cf7c8a" exitCode=0 Jan 20 04:15:01 crc kubenswrapper[4898]: I0120 04:15:01.645777 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" event={"ID":"f241279c-d727-47c7-9cb8-3adf038b09d3","Type":"ContainerDied","Data":"b37d51c7f71246b0ce8aeb7ae1395d0b2045f11d2428e300f4e7d77fc6cf7c8a"} Jan 20 04:15:01 crc kubenswrapper[4898]: I0120 04:15:01.646063 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" event={"ID":"f241279c-d727-47c7-9cb8-3adf038b09d3","Type":"ContainerStarted","Data":"fc809e896fad8e3af54ee5d28e61a161a3db86840c1be8daab4d5d840c9587e8"} Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.025719 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.171812 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svszq\" (UniqueName: \"kubernetes.io/projected/f241279c-d727-47c7-9cb8-3adf038b09d3-kube-api-access-svszq\") pod \"f241279c-d727-47c7-9cb8-3adf038b09d3\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.171965 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f241279c-d727-47c7-9cb8-3adf038b09d3-config-volume\") pod \"f241279c-d727-47c7-9cb8-3adf038b09d3\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.172011 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f241279c-d727-47c7-9cb8-3adf038b09d3-secret-volume\") pod \"f241279c-d727-47c7-9cb8-3adf038b09d3\" (UID: \"f241279c-d727-47c7-9cb8-3adf038b09d3\") " Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.172826 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f241279c-d727-47c7-9cb8-3adf038b09d3-config-volume" (OuterVolumeSpecName: "config-volume") pod "f241279c-d727-47c7-9cb8-3adf038b09d3" (UID: "f241279c-d727-47c7-9cb8-3adf038b09d3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.177324 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f241279c-d727-47c7-9cb8-3adf038b09d3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f241279c-d727-47c7-9cb8-3adf038b09d3" (UID: "f241279c-d727-47c7-9cb8-3adf038b09d3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.180109 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f241279c-d727-47c7-9cb8-3adf038b09d3-kube-api-access-svszq" (OuterVolumeSpecName: "kube-api-access-svszq") pod "f241279c-d727-47c7-9cb8-3adf038b09d3" (UID: "f241279c-d727-47c7-9cb8-3adf038b09d3"). InnerVolumeSpecName "kube-api-access-svszq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.274785 4898 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f241279c-d727-47c7-9cb8-3adf038b09d3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.274839 4898 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f241279c-d727-47c7-9cb8-3adf038b09d3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.274859 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svszq\" (UniqueName: \"kubernetes.io/projected/f241279c-d727-47c7-9cb8-3adf038b09d3-kube-api-access-svszq\") on node \"crc\" DevicePath \"\"" Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.673922 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" event={"ID":"f241279c-d727-47c7-9cb8-3adf038b09d3","Type":"ContainerDied","Data":"fc809e896fad8e3af54ee5d28e61a161a3db86840c1be8daab4d5d840c9587e8"} Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.673960 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc809e896fad8e3af54ee5d28e61a161a3db86840c1be8daab4d5d840c9587e8" Jan 20 04:15:03 crc kubenswrapper[4898]: I0120 04:15:03.674025 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl" Jan 20 04:15:16 crc kubenswrapper[4898]: I0120 04:15:16.271994 4898 scope.go:117] "RemoveContainer" containerID="5a2ed03b4d2609760f3955725dac6c87a66ccc6574183d749f6489f5cd5dbe20" Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.039037 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85a1-account-create-update-8wh6v"] Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.046905 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a8cf-account-create-update-5t4l9"] Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.055698 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7g6jm"] Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.063734 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zbqjf"] Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.070797 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85a1-account-create-update-8wh6v"] Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.082673 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zbqjf"] Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.089321 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7g6jm"] Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.095658 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a8cf-account-create-update-5t4l9"] Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.737604 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33bf9790-4fcb-4959-b8b3-2f77741968c7" path="/var/lib/kubelet/pods/33bf9790-4fcb-4959-b8b3-2f77741968c7/volumes" Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.738344 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="569429a7-1cab-4f29-9a5f-5430c3364d56" path="/var/lib/kubelet/pods/569429a7-1cab-4f29-9a5f-5430c3364d56/volumes" Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.742306 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc07dbb-703a-49b5-be97-6162c5fba9e0" path="/var/lib/kubelet/pods/bcc07dbb-703a-49b5-be97-6162c5fba9e0/volumes" Jan 20 04:15:23 crc kubenswrapper[4898]: I0120 04:15:23.742899 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f059d6d8-faf3-4f28-977d-c8786a790906" path="/var/lib/kubelet/pods/f059d6d8-faf3-4f28-977d-c8786a790906/volumes" Jan 20 04:15:25 crc kubenswrapper[4898]: I0120 04:15:25.030487 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dkf2b"] Jan 20 04:15:25 crc kubenswrapper[4898]: I0120 04:15:25.038387 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d625-account-create-update-dkfhm"] Jan 20 04:15:25 crc kubenswrapper[4898]: I0120 04:15:25.046604 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dkf2b"] Jan 20 04:15:25 crc kubenswrapper[4898]: I0120 04:15:25.053959 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d625-account-create-update-dkfhm"] Jan 20 04:15:25 crc kubenswrapper[4898]: I0120 04:15:25.756031 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c413de1-fc94-4f5e-b697-fb6f94d99d46" path="/var/lib/kubelet/pods/0c413de1-fc94-4f5e-b697-fb6f94d99d46/volumes" Jan 20 04:15:25 crc kubenswrapper[4898]: I0120 04:15:25.757243 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="659ed26d-0996-42cf-9288-f9c6567f61a8" path="/var/lib/kubelet/pods/659ed26d-0996-42cf-9288-f9c6567f61a8/volumes" Jan 20 04:15:54 crc kubenswrapper[4898]: I0120 04:15:54.042140 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-srv84"] Jan 20 04:15:54 crc kubenswrapper[4898]: I0120 04:15:54.050204 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-srv84"] Jan 20 04:15:55 crc kubenswrapper[4898]: I0120 04:15:55.730972 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="befd057f-4068-49ce-8679-33b0b01fabfc" path="/var/lib/kubelet/pods/befd057f-4068-49ce-8679-33b0b01fabfc/volumes" Jan 20 04:16:01 crc kubenswrapper[4898]: I0120 04:16:01.028931 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-5wm2t"] Jan 20 04:16:01 crc kubenswrapper[4898]: I0120 04:16:01.037910 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-5wm2t"] Jan 20 04:16:01 crc kubenswrapper[4898]: I0120 04:16:01.730559 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89e4d258-a008-4a05-ae27-1d5c03654aa2" path="/var/lib/kubelet/pods/89e4d258-a008-4a05-ae27-1d5c03654aa2/volumes" Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.042269 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-67f4-account-create-update-xwdx5"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.050003 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-67f4-account-create-update-xwdx5"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.059311 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ng4j5"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.066552 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-d158-account-create-update-nfbsk"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.075310 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-206e-account-create-update-jlllb"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.084494 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ng4j5"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.092000 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bwv6k"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.103785 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d158-account-create-update-nfbsk"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.106614 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-206e-account-create-update-jlllb"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.113815 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bwv6k"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.121400 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d299-account-create-update-tm2k8"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.129021 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d299-account-create-update-tm2k8"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.136958 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-nrzcw"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.144591 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kcrtx"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.152244 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kcrtx"] Jan 20 04:16:04 crc kubenswrapper[4898]: I0120 04:16:04.159969 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-nrzcw"] Jan 20 04:16:05 crc kubenswrapper[4898]: I0120 04:16:05.731559 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a8a45e8-91bc-40b8-9a92-b8f82709a03a" path="/var/lib/kubelet/pods/2a8a45e8-91bc-40b8-9a92-b8f82709a03a/volumes" Jan 20 04:16:05 crc kubenswrapper[4898]: I0120 04:16:05.732625 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e099701-df46-48de-883e-65d209f81af0" path="/var/lib/kubelet/pods/2e099701-df46-48de-883e-65d209f81af0/volumes" Jan 20 04:16:05 crc kubenswrapper[4898]: I0120 04:16:05.733198 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c91d56-5dc8-4607-aad0-85214357b977" path="/var/lib/kubelet/pods/67c91d56-5dc8-4607-aad0-85214357b977/volumes" Jan 20 04:16:05 crc kubenswrapper[4898]: I0120 04:16:05.733725 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="710b53e5-5753-4b12-b02e-516fc4b2ed8f" path="/var/lib/kubelet/pods/710b53e5-5753-4b12-b02e-516fc4b2ed8f/volumes" Jan 20 04:16:05 crc kubenswrapper[4898]: I0120 04:16:05.734704 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b590651-4b56-4cf7-8374-c8fe0c8b26e5" path="/var/lib/kubelet/pods/8b590651-4b56-4cf7-8374-c8fe0c8b26e5/volumes" Jan 20 04:16:05 crc kubenswrapper[4898]: I0120 04:16:05.735197 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e3e17d9-6103-4600-8159-178bcefd2c84" 
path="/var/lib/kubelet/pods/8e3e17d9-6103-4600-8159-178bcefd2c84/volumes" Jan 20 04:16:05 crc kubenswrapper[4898]: I0120 04:16:05.735707 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14bc956-d554-4fef-be24-28d68be49afe" path="/var/lib/kubelet/pods/b14bc956-d554-4fef-be24-28d68be49afe/volumes" Jan 20 04:16:05 crc kubenswrapper[4898]: I0120 04:16:05.736640 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb0f751-8705-49a0-9d9a-67633e2f0379" path="/var/lib/kubelet/pods/edb0f751-8705-49a0-9d9a-67633e2f0379/volumes" Jan 20 04:16:08 crc kubenswrapper[4898]: I0120 04:16:08.036892 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-7gkfn"] Jan 20 04:16:08 crc kubenswrapper[4898]: I0120 04:16:08.057065 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-7gkfn"] Jan 20 04:16:09 crc kubenswrapper[4898]: I0120 04:16:09.732092 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10755ec2-23ba-4fea-852a-546e494f98df" path="/var/lib/kubelet/pods/10755ec2-23ba-4fea-852a-546e494f98df/volumes" Jan 20 04:16:09 crc kubenswrapper[4898]: I0120 04:16:09.975538 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:16:09 crc kubenswrapper[4898]: I0120 04:16:09.975618 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.336766 4898 scope.go:117] "RemoveContainer" containerID="dc767f467ad644b7c118469ebba23adc2d68db5c0aebe2cffff58b3db28b8683" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.379358 4898 scope.go:117] "RemoveContainer" containerID="6411c293fddb6f36132b72448ebaf3bd72ad0c7a618658296adc0f5d5be924dc" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.412699 4898 scope.go:117] "RemoveContainer" containerID="1f0f1bd77a5f23b4f28bbfdd8332ceb61a972afc1f5a34d9edaf5b7c8380dd2b" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.448776 4898 scope.go:117] "RemoveContainer" containerID="58cd567b840d4c90483f01188ff97e82e469ea8c7de68c7e12279fed88135fbf" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.490972 4898 scope.go:117] "RemoveContainer" containerID="abbd009e408fa9f3d13162797ae7e15894ddb91e37cd53f98b1c7dd138690d65" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.521011 4898 scope.go:117] "RemoveContainer" containerID="44ced44f63f45a8fba9ef29341a555df3f499a211ced250d0c5300b5d42298a4" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.557807 4898 scope.go:117] "RemoveContainer" containerID="d86152712962ffccbae21714f2c1a828063a081ddca3d31dec292ab5b6d94f07" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.574131 4898 scope.go:117] "RemoveContainer" containerID="2ca51ff5a7de11da0315f81ff2e6d99d146784adfef32640a1f3dcb9f2ad143e" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.598798 4898 scope.go:117] "RemoveContainer" containerID="1b3254c578de968d14f3a705b2f2532cab8b1d8099cd414e982661a631d3b83c" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 
04:16:16.615538 4898 scope.go:117] "RemoveContainer" containerID="016bacf2a366a084523ddf98ed44b9d47203df2ca7040087ea4dd53244751c49" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.640899 4898 scope.go:117] "RemoveContainer" containerID="9377ed6020812d5ed6032abfe684d20e97912491cb5ed0d1642fd7888760b314" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.660100 4898 scope.go:117] "RemoveContainer" containerID="bf93cbc90dd158b1cdd9f5e226b81b0af0cac3edb57ed078d9d788cb53dc17ac" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.681681 4898 scope.go:117] "RemoveContainer" containerID="8f7586655ce567e812884d71d4f168718aef199339faefacfec57d292b198a0f" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.700777 4898 scope.go:117] "RemoveContainer" containerID="d887b73bbbbadd3085f1373f61577c1a9c4f54ebf9313d9ca00538a42491ec36" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.723030 4898 scope.go:117] "RemoveContainer" containerID="bbfac92806eb1c0ab61485d2fd6336ebf0bc06204df1d5770a601a828e23b6cf" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.758920 4898 scope.go:117] "RemoveContainer" containerID="9350047952c9a49fcf8e732ba0dc0ed170f1015a176064259a7eeb59aafe6d28" Jan 20 04:16:16 crc kubenswrapper[4898]: I0120 04:16:16.778898 4898 scope.go:117] "RemoveContainer" containerID="e7a64934f4de9c5b9893918259531267e5446e006e8ae984ed03d695c1bc3422" Jan 20 04:16:39 crc kubenswrapper[4898]: I0120 04:16:39.975577 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:16:39 crc kubenswrapper[4898]: I0120 04:16:39.976507 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:16:48 crc kubenswrapper[4898]: I0120 04:16:48.058025 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sppjk"] Jan 20 04:16:48 crc kubenswrapper[4898]: I0120 04:16:48.071993 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6n7z5"] Jan 20 04:16:48 crc kubenswrapper[4898]: I0120 04:16:48.081995 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-bwv29"] Jan 20 04:16:48 crc kubenswrapper[4898]: I0120 04:16:48.097853 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sppjk"] Jan 20 04:16:48 crc kubenswrapper[4898]: I0120 04:16:48.106783 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6n7z5"] Jan 20 04:16:48 crc kubenswrapper[4898]: I0120 04:16:48.114018 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-vvf7g"] Jan 20 04:16:48 crc kubenswrapper[4898]: I0120 04:16:48.121148 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-bwv29"] Jan 20 04:16:48 crc kubenswrapper[4898]: I0120 04:16:48.131181 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-vvf7g"] Jan 20 04:16:49 crc kubenswrapper[4898]: I0120 04:16:49.733913 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="15b219ed-e32a-4f8c-b3f7-2282e6fddcb3" path="/var/lib/kubelet/pods/15b219ed-e32a-4f8c-b3f7-2282e6fddcb3/volumes" Jan 20 04:16:49 crc kubenswrapper[4898]: I0120 04:16:49.734685 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2333e11a-59ac-4a16-914a-e846f5fa04d7" path="/var/lib/kubelet/pods/2333e11a-59ac-4a16-914a-e846f5fa04d7/volumes" Jan 20 04:16:49 crc kubenswrapper[4898]: I0120 04:16:49.735522 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7306e81d-2494-41f7-84ba-e23d15cf73c5" path="/var/lib/kubelet/pods/7306e81d-2494-41f7-84ba-e23d15cf73c5/volumes" Jan 20 04:16:49 crc kubenswrapper[4898]: I0120 04:16:49.737419 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f911b103-17f6-4caf-86f3-56f70295a884" path="/var/lib/kubelet/pods/f911b103-17f6-4caf-86f3-56f70295a884/volumes" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.120095 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2mn8q"] Jan 20 04:16:50 crc kubenswrapper[4898]: E0120 04:16:50.121115 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f241279c-d727-47c7-9cb8-3adf038b09d3" containerName="collect-profiles" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.121148 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f241279c-d727-47c7-9cb8-3adf038b09d3" containerName="collect-profiles" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.121526 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f241279c-d727-47c7-9cb8-3adf038b09d3" containerName="collect-profiles" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.123856 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.139180 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2mn8q"] Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.174299 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-utilities\") pod \"community-operators-2mn8q\" (UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.174526 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-catalog-content\") pod \"community-operators-2mn8q\" (UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.174741 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm2zc\" (UniqueName: \"kubernetes.io/projected/9c6e9264-ae79-4c81-8683-84af8b86bfe4-kube-api-access-dm2zc\") pod \"community-operators-2mn8q\" (UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.276931 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-catalog-content\") pod \"community-operators-2mn8q\" 
(UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.277050 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm2zc\" (UniqueName: \"kubernetes.io/projected/9c6e9264-ae79-4c81-8683-84af8b86bfe4-kube-api-access-dm2zc\") pod \"community-operators-2mn8q\" (UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.277079 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-utilities\") pod \"community-operators-2mn8q\" (UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.277652 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-utilities\") pod \"community-operators-2mn8q\" (UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.277873 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-catalog-content\") pod \"community-operators-2mn8q\" (UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.304074 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm2zc\" (UniqueName: \"kubernetes.io/projected/9c6e9264-ae79-4c81-8683-84af8b86bfe4-kube-api-access-dm2zc\") pod \"community-operators-2mn8q\" (UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.484784 4898 util.go:30] "No sandbox for pod can be found. 
Jan 20 04:16:50 crc kubenswrapper[4898]: I0120 04:16:50.484784 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2mn8q"
Jan 20 04:16:51 crc kubenswrapper[4898]: I0120 04:16:51.064520 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2mn8q"]
Jan 20 04:16:51 crc kubenswrapper[4898]: W0120 04:16:51.069775 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c6e9264_ae79_4c81_8683_84af8b86bfe4.slice/crio-60e68b7581a4f40cd0f80876753019936da25dcc1284a54e7a29f8fc24d8696b WatchSource:0}: Error finding container 60e68b7581a4f40cd0f80876753019936da25dcc1284a54e7a29f8fc24d8696b: Status 404 returned error can't find the container with id 60e68b7581a4f40cd0f80876753019936da25dcc1284a54e7a29f8fc24d8696b
Jan 20 04:16:51 crc kubenswrapper[4898]: I0120 04:16:51.753019 4898 generic.go:334] "Generic (PLEG): container finished" podID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" containerID="4239945396ccd687d06443defb7db230dbb149d193473cbddfe81715d52490ed" exitCode=0
Jan 20 04:16:51 crc kubenswrapper[4898]: I0120 04:16:51.753104 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mn8q" event={"ID":"9c6e9264-ae79-4c81-8683-84af8b86bfe4","Type":"ContainerDied","Data":"4239945396ccd687d06443defb7db230dbb149d193473cbddfe81715d52490ed"}
Jan 20 04:16:51 crc kubenswrapper[4898]: I0120 04:16:51.753491 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mn8q" event={"ID":"9c6e9264-ae79-4c81-8683-84af8b86bfe4","Type":"ContainerStarted","Data":"60e68b7581a4f40cd0f80876753019936da25dcc1284a54e7a29f8fc24d8696b"}
Jan 20 04:16:51 crc kubenswrapper[4898]: I0120 04:16:51.757845 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 04:16:52 crc kubenswrapper[4898]: I0120 04:16:52.766039 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mn8q" event={"ID":"9c6e9264-ae79-4c81-8683-84af8b86bfe4","Type":"ContainerStarted","Data":"1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8"}
Jan 20 04:16:53 crc kubenswrapper[4898]: I0120 04:16:53.778293 4898 generic.go:334] "Generic (PLEG): container finished" podID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" containerID="1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8" exitCode=0
Jan 20 04:16:53 crc kubenswrapper[4898]: I0120 04:16:53.778336 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mn8q" event={"ID":"9c6e9264-ae79-4c81-8683-84af8b86bfe4","Type":"ContainerDied","Data":"1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8"}
Jan 20 04:16:54 crc kubenswrapper[4898]: I0120 04:16:54.787622 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mn8q" event={"ID":"9c6e9264-ae79-4c81-8683-84af8b86bfe4","Type":"ContainerStarted","Data":"2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc"}
Jan 20 04:16:54 crc kubenswrapper[4898]: I0120 04:16:54.812140 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2mn8q" podStartSLOduration=2.306726772 podStartE2EDuration="4.812123472s" podCreationTimestamp="2026-01-20 04:16:50 +0000 UTC" firstStartedPulling="2026-01-20 04:16:51.757314782 +0000 UTC m=+1658.357102671" lastFinishedPulling="2026-01-20 04:16:54.262711512 +0000 UTC m=+1660.862499371" observedRunningTime="2026-01-20 04:16:54.805517919 +0000 UTC m=+1661.405305778" watchObservedRunningTime="2026-01-20 04:16:54.812123472 +0000 UTC m=+1661.411911321"
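
The "Observed pod startup duration" entry above is internally consistent: podStartE2EDuration is the time from podCreationTimestamp (04:16:50) to the observed running time, while podStartSLOduration is the same interval minus the image-pull window (firstStartedPulling to lastFinishedPulling), matching the upstream pod-startup SLI, which excludes image pulls. Checking the arithmetic with the monotonic offsets (the "m=+..." values) printed in the entry itself:

```python
# Monotonic clock offsets copied from the pod_startup_latency_tracker
# entry above, in seconds.
first_started_pulling = 1658.357102671
last_finished_pulling = 1660.862499371
pod_start_e2e = 4.812123472            # podStartE2EDuration

image_pull_window = last_finished_pulling - first_started_pulling
slo_duration = pod_start_e2e - image_pull_window

print(f"image pull window:   {image_pull_window:.9f}s")  # 2.505396700s
print(f"podStartSLOduration: {slo_duration:.9f}s")       # 2.306726772s, exactly as logged
```
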
observedRunningTime="2026-01-20 04:16:54.805517919 +0000 UTC m=+1661.405305778" watchObservedRunningTime="2026-01-20 04:16:54.812123472 +0000 UTC m=+1661.411911321" Jan 20 04:17:00 crc kubenswrapper[4898]: I0120 04:17:00.485962 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:17:00 crc kubenswrapper[4898]: I0120 04:17:00.486987 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:17:00 crc kubenswrapper[4898]: I0120 04:17:00.552864 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:17:00 crc kubenswrapper[4898]: I0120 04:17:00.882417 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:17:00 crc kubenswrapper[4898]: I0120 04:17:00.945207 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2mn8q"] Jan 20 04:17:02 crc kubenswrapper[4898]: I0120 04:17:02.044232 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-rdd4p"] Jan 20 04:17:02 crc kubenswrapper[4898]: I0120 04:17:02.056390 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-rdd4p"] Jan 20 04:17:02 crc kubenswrapper[4898]: I0120 04:17:02.850808 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2mn8q" podUID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" containerName="registry-server" containerID="cri-o://2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc" gracePeriod=2 Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.031485 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9ws79"] Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.043069 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9ws79"] Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.746960 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e533f97-e194-486f-9125-b29cf19e6648" path="/var/lib/kubelet/pods/9e533f97-e194-486f-9125-b29cf19e6648/volumes" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.748066 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad38dd1a-677c-4db0-b349-684b1ca42820" path="/var/lib/kubelet/pods/ad38dd1a-677c-4db0-b349-684b1ca42820/volumes" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.788334 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.847336 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm2zc\" (UniqueName: \"kubernetes.io/projected/9c6e9264-ae79-4c81-8683-84af8b86bfe4-kube-api-access-dm2zc\") pod \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\" (UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.847406 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-catalog-content\") pod \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\" (UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.847497 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-utilities\") pod \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\" (UID: \"9c6e9264-ae79-4c81-8683-84af8b86bfe4\") " Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.848279 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-utilities" (OuterVolumeSpecName: "utilities") pod "9c6e9264-ae79-4c81-8683-84af8b86bfe4" (UID: "9c6e9264-ae79-4c81-8683-84af8b86bfe4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.849552 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.852212 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6e9264-ae79-4c81-8683-84af8b86bfe4-kube-api-access-dm2zc" (OuterVolumeSpecName: "kube-api-access-dm2zc") pod "9c6e9264-ae79-4c81-8683-84af8b86bfe4" (UID: "9c6e9264-ae79-4c81-8683-84af8b86bfe4"). InnerVolumeSpecName "kube-api-access-dm2zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.860856 4898 generic.go:334] "Generic (PLEG): container finished" podID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" containerID="2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc" exitCode=0 Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.860902 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mn8q" event={"ID":"9c6e9264-ae79-4c81-8683-84af8b86bfe4","Type":"ContainerDied","Data":"2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc"} Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.860944 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2mn8q" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.860957 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mn8q" event={"ID":"9c6e9264-ae79-4c81-8683-84af8b86bfe4","Type":"ContainerDied","Data":"60e68b7581a4f40cd0f80876753019936da25dcc1284a54e7a29f8fc24d8696b"} Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.860982 4898 scope.go:117] "RemoveContainer" containerID="2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.902590 4898 scope.go:117] "RemoveContainer" containerID="1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.911513 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c6e9264-ae79-4c81-8683-84af8b86bfe4" (UID: "9c6e9264-ae79-4c81-8683-84af8b86bfe4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.933500 4898 scope.go:117] "RemoveContainer" containerID="4239945396ccd687d06443defb7db230dbb149d193473cbddfe81715d52490ed" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.951256 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm2zc\" (UniqueName: \"kubernetes.io/projected/9c6e9264-ae79-4c81-8683-84af8b86bfe4-kube-api-access-dm2zc\") on node \"crc\" DevicePath \"\"" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.951291 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6e9264-ae79-4c81-8683-84af8b86bfe4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.984771 4898 scope.go:117] "RemoveContainer" containerID="2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc" Jan 20 04:17:03 crc kubenswrapper[4898]: E0120 04:17:03.985318 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc\": container with ID starting with 2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc not found: ID does not exist" containerID="2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.985359 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc"} err="failed to get container status \"2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc\": rpc error: code = NotFound desc = could not find container \"2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc\": container with ID starting with 2bfaeb14daa422c85a3f1469279bfa23e2f6fb8d59513dec394d8ed84bd033dc not found: ID does not exist" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.985386 4898 scope.go:117] "RemoveContainer" containerID="1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8" Jan 20 04:17:03 crc kubenswrapper[4898]: E0120 04:17:03.985886 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8\": container with ID starting with 1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8 not found: ID does not exist" containerID="1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.985915 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8"} err="failed to get container status \"1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8\": rpc error: code = NotFound desc = could not find container \"1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8\": container with ID starting with 1670da79d48bbb96f44ac80109f91cb2500a98a23acfadba7c26803cd2a796f8 not found: ID does not exist" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.985933 4898 scope.go:117] "RemoveContainer" containerID="4239945396ccd687d06443defb7db230dbb149d193473cbddfe81715d52490ed" Jan 20 04:17:03 crc kubenswrapper[4898]: E0120 04:17:03.986241 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4239945396ccd687d06443defb7db230dbb149d193473cbddfe81715d52490ed\": container with ID starting with 4239945396ccd687d06443defb7db230dbb149d193473cbddfe81715d52490ed not found: ID does not exist" containerID="4239945396ccd687d06443defb7db230dbb149d193473cbddfe81715d52490ed" Jan 20 04:17:03 crc kubenswrapper[4898]: I0120 04:17:03.986278 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4239945396ccd687d06443defb7db230dbb149d193473cbddfe81715d52490ed"} err="failed to get container status \"4239945396ccd687d06443defb7db230dbb149d193473cbddfe81715d52490ed\": rpc error: code = NotFound desc = could not find container \"4239945396ccd687d06443defb7db230dbb149d193473cbddfe81715d52490ed\": container with ID starting with 4239945396ccd687d06443defb7db230dbb149d193473cbddfe81715d52490ed not found: ID does not exist" Jan 20 04:17:04 crc kubenswrapper[4898]: I0120 04:17:04.198772 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2mn8q"] Jan 20 04:17:04 crc kubenswrapper[4898]: I0120 04:17:04.206775 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2mn8q"] Jan 20 04:17:05 crc kubenswrapper[4898]: I0120 04:17:05.734992 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" path="/var/lib/kubelet/pods/9c6e9264-ae79-4c81-8683-84af8b86bfe4/volumes" Jan 20 04:17:09 crc kubenswrapper[4898]: I0120 04:17:09.976201 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:17:09 crc kubenswrapper[4898]: I0120 04:17:09.976883 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:17:09 crc kubenswrapper[4898]: I0120 04:17:09.976941 4898 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 04:17:09 crc kubenswrapper[4898]: I0120 04:17:09.977860 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 04:17:09 crc kubenswrapper[4898]: I0120 04:17:09.977921 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89" gracePeriod=600 Jan 20 04:17:10 crc kubenswrapper[4898]: E0120 04:17:10.112761 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:17:10 crc kubenswrapper[4898]: I0120 04:17:10.927301 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89" exitCode=0 Jan 20 04:17:10 crc kubenswrapper[4898]: I0120 04:17:10.927402 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"} Jan 20 04:17:10 crc kubenswrapper[4898]: I0120 04:17:10.927961 4898 scope.go:117] "RemoveContainer" containerID="bad17c7f4ed87fcd8059992345ba30f1186ca3dfbfc22488f00e672024fef414" Jan 20 04:17:10 crc kubenswrapper[4898]: I0120 04:17:10.928712 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89" Jan 20 04:17:10 crc kubenswrapper[4898]: E0120 04:17:10.929023 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:17:17 crc kubenswrapper[4898]: I0120 04:17:17.050856 4898 scope.go:117] "RemoveContainer" containerID="7ccf5158b9f0b35d5866a559634a442e76aa1029820a156eb53f58996c661d60" Jan 20 04:17:17 crc kubenswrapper[4898]: I0120 04:17:17.101611 4898 scope.go:117] "RemoveContainer" containerID="6eb4a41f50ff2b3f47a448e0930f80008d0a2516f06967d5b06291085a3c7c4d" Jan 20 04:17:17 crc kubenswrapper[4898]: I0120 04:17:17.176124 4898 scope.go:117] "RemoveContainer" containerID="a8a9330ea287a8588b67dcfe381ddd7fbb8e2611ee7ef03ae0a67a1b78ae371c" Jan 20 04:17:17 crc kubenswrapper[4898]: I0120 04:17:17.231585 4898 scope.go:117] "RemoveContainer" 
containerID="93d609b1228b8176c7493225ee9142b68fd51bce54c9a0bdfb32c0492a32245c" Jan 20 04:17:17 crc kubenswrapper[4898]: I0120 04:17:17.273656 4898 scope.go:117] "RemoveContainer" containerID="bd0b80f057185241fc9b6a60514cadf528712a69597d8a45d6056e73dfbd2024" Jan 20 04:17:17 crc kubenswrapper[4898]: I0120 04:17:17.329345 4898 scope.go:117] "RemoveContainer" containerID="63507c8ac0e818979463b5d432f5b3ea8fd400fc403ca69d8c4c1086bb1e59ed" Jan 20 04:17:26 crc kubenswrapper[4898]: I0120 04:17:26.721085 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89" Jan 20 04:17:26 crc kubenswrapper[4898]: E0120 04:17:26.721953 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:17:38 crc kubenswrapper[4898]: I0120 04:17:38.721239 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89" Jan 20 04:17:38 crc kubenswrapper[4898]: E0120 04:17:38.722121 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:17:48 crc kubenswrapper[4898]: I0120 04:17:48.038689 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2815-account-create-update-qr72w"] Jan 20 04:17:48 crc kubenswrapper[4898]: I0120 04:17:48.046490 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-tkcqt"] Jan 20 04:17:48 crc kubenswrapper[4898]: I0120 04:17:48.053918 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-cnqlc"] Jan 20 04:17:48 crc kubenswrapper[4898]: I0120 04:17:48.062156 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2815-account-create-update-qr72w"] Jan 20 04:17:48 crc kubenswrapper[4898]: I0120 04:17:48.069588 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-tkcqt"] Jan 20 04:17:48 crc kubenswrapper[4898]: I0120 04:17:48.079348 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-cnqlc"] Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.029966 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7jz5l"] Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.036392 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7e33-account-create-update-bkwm9"] Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.042876 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-881c-account-create-update-tcrgw"] Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.049591 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7jz5l"] Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.058136 4898 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7e33-account-create-update-bkwm9"] Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.065249 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-881c-account-create-update-tcrgw"] Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.739905 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cf8572-b400-41c1-ab44-089877cca867" path="/var/lib/kubelet/pods/08cf8572-b400-41c1-ab44-089877cca867/volumes" Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.741137 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f9b463-34e5-4ed0-b31e-0662ea4c09a8" path="/var/lib/kubelet/pods/14f9b463-34e5-4ed0-b31e-0662ea4c09a8/volumes" Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.742575 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d8e86b-0ab5-4724-8213-bd7258c4a124" path="/var/lib/kubelet/pods/91d8e86b-0ab5-4724-8213-bd7258c4a124/volumes" Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.743638 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ff3424-465b-4cd1-a1b2-f30dfcb68e27" path="/var/lib/kubelet/pods/94ff3424-465b-4cd1-a1b2-f30dfcb68e27/volumes" Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.745553 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee661f59-12b9-4598-afcc-c20dac6c2694" path="/var/lib/kubelet/pods/ee661f59-12b9-4598-afcc-c20dac6c2694/volumes" Jan 20 04:17:49 crc kubenswrapper[4898]: I0120 04:17:49.746982 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8" path="/var/lib/kubelet/pods/fa6be5c3-c13c-4d1c-8a0a-589cdc5c25a8/volumes" Jan 20 04:17:51 crc kubenswrapper[4898]: I0120 04:17:51.721938 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89" Jan 20 04:17:51 crc kubenswrapper[4898]: E0120 04:17:51.722955 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:18:06 crc kubenswrapper[4898]: I0120 04:18:06.721113 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89" Jan 20 04:18:06 crc kubenswrapper[4898]: E0120 04:18:06.721850 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.010783 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4dc4d"] Jan 20 04:18:07 crc kubenswrapper[4898]: E0120 04:18:07.012777 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" containerName="extract-utilities" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 
04:18:07.012824 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" containerName="extract-utilities" Jan 20 04:18:07 crc kubenswrapper[4898]: E0120 04:18:07.012859 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" containerName="extract-content" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.012880 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" containerName="extract-content" Jan 20 04:18:07 crc kubenswrapper[4898]: E0120 04:18:07.012965 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" containerName="registry-server" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.012983 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" containerName="registry-server" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.016178 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6e9264-ae79-4c81-8683-84af8b86bfe4" containerName="registry-server" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.018999 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.026848 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4dc4d"] Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.172499 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf282\" (UniqueName: \"kubernetes.io/projected/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-kube-api-access-vf282\") pod \"certified-operators-4dc4d\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") " pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.172622 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-catalog-content\") pod \"certified-operators-4dc4d\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") " pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.172696 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-utilities\") pod \"certified-operators-4dc4d\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") " pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.274572 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-utilities\") pod \"certified-operators-4dc4d\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") " pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.274670 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf282\" (UniqueName: \"kubernetes.io/projected/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-kube-api-access-vf282\") pod \"certified-operators-4dc4d\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") " pod="openshift-marketplace/certified-operators-4dc4d" 
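
Interleaved with the certified-operators-4dc4d startup here is the ongoing machine-config-daemon-cwlf6 crash loop: its liveness probe (an HTTP GET against http://127.0.0.1:8798/health) keeps failing with connection refused, the kubelet kills the container ("failed liveness probe, will be restarted", gracePeriod=600), and every later restart attempt is refused with "back-off 5m0s restarting failed container", meaning the per-container restart back-off has reached its ceiling. By default the kubelet doubles that back-off on each crash, from 10s up to the 5m cap printed in the message, and the "Error syncing pod, skipping" entries repeating every 10-15s are just the sync loop re-checking while the timer runs. A sketch of that schedule, assuming the kubelet defaults (the values are not read from this log):

```python
def crashloop_backoff(restart_count, initial=10.0, factor=2.0, cap=300.0):
    """Back-off (seconds) before a given restart, mirroring the kubelet's
    default doubling schedule: 10s, 20s, 40s, ... capped at 5 minutes."""
    return min(initial * factor ** restart_count, cap)

print([crashloop_backoff(n) for n in range(7)])
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0] -- "back-off 5m0s" is the cap
```
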
Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.274740 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-catalog-content\") pod \"certified-operators-4dc4d\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") " pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.275215 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-catalog-content\") pod \"certified-operators-4dc4d\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") " pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.275464 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-utilities\") pod \"certified-operators-4dc4d\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") " pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.301537 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf282\" (UniqueName: \"kubernetes.io/projected/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-kube-api-access-vf282\") pod \"certified-operators-4dc4d\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") " pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.349391 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:07 crc kubenswrapper[4898]: I0120 04:18:07.660241 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4dc4d"] Jan 20 04:18:07 crc kubenswrapper[4898]: W0120 04:18:07.673744 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1ccf6db_6dfe_4c64_bd65_8e6dcda1269b.slice/crio-9542999e2ddf4dcfb2d1dd3adfd27b4811d939783af4b5933245002a98c1414b WatchSource:0}: Error finding container 9542999e2ddf4dcfb2d1dd3adfd27b4811d939783af4b5933245002a98c1414b: Status 404 returned error can't find the container with id 9542999e2ddf4dcfb2d1dd3adfd27b4811d939783af4b5933245002a98c1414b Jan 20 04:18:08 crc kubenswrapper[4898]: I0120 04:18:08.478051 4898 generic.go:334] "Generic (PLEG): container finished" podID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" containerID="96d42dbad3cd834919bf3c93a7885f1cbe98d7ec91e1cfb0217a94d5f10e8697" exitCode=0 Jan 20 04:18:08 crc kubenswrapper[4898]: I0120 04:18:08.478097 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dc4d" event={"ID":"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b","Type":"ContainerDied","Data":"96d42dbad3cd834919bf3c93a7885f1cbe98d7ec91e1cfb0217a94d5f10e8697"} Jan 20 04:18:08 crc kubenswrapper[4898]: I0120 04:18:08.478331 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dc4d" event={"ID":"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b","Type":"ContainerStarted","Data":"9542999e2ddf4dcfb2d1dd3adfd27b4811d939783af4b5933245002a98c1414b"} Jan 20 04:18:09 crc kubenswrapper[4898]: I0120 04:18:09.495767 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dc4d" 
event={"ID":"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b","Type":"ContainerStarted","Data":"60f642f1432df9b9d5713debacac6e8a11c72b306ab43408156a0f3a6e504e7a"} Jan 20 04:18:10 crc kubenswrapper[4898]: I0120 04:18:10.507525 4898 generic.go:334] "Generic (PLEG): container finished" podID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" containerID="60f642f1432df9b9d5713debacac6e8a11c72b306ab43408156a0f3a6e504e7a" exitCode=0 Jan 20 04:18:10 crc kubenswrapper[4898]: I0120 04:18:10.507657 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dc4d" event={"ID":"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b","Type":"ContainerDied","Data":"60f642f1432df9b9d5713debacac6e8a11c72b306ab43408156a0f3a6e504e7a"} Jan 20 04:18:11 crc kubenswrapper[4898]: I0120 04:18:11.532015 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dc4d" event={"ID":"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b","Type":"ContainerStarted","Data":"4ea2c047b6c6b39a86f0017542f757899f7567982016c7ff85f2d3806192ce43"} Jan 20 04:18:11 crc kubenswrapper[4898]: I0120 04:18:11.556348 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4dc4d" podStartSLOduration=3.020719701 podStartE2EDuration="5.556332919s" podCreationTimestamp="2026-01-20 04:18:06 +0000 UTC" firstStartedPulling="2026-01-20 04:18:08.479906416 +0000 UTC m=+1735.079694275" lastFinishedPulling="2026-01-20 04:18:11.015519634 +0000 UTC m=+1737.615307493" observedRunningTime="2026-01-20 04:18:11.55026655 +0000 UTC m=+1738.150054399" watchObservedRunningTime="2026-01-20 04:18:11.556332919 +0000 UTC m=+1738.156120778" Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.039299 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hk8gg"] Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.050611 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hk8gg"] Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.349769 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.349956 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.397016 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4dc4d" Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.464106 4898 scope.go:117] "RemoveContainer" containerID="c4e6056e3fc035c334d2a54ddb4f1b2c1f3dcdd4e1dbf191af733f442ba6e716" Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.500701 4898 scope.go:117] "RemoveContainer" containerID="8cb5ccf65ed9dc59227727ee8464ee422e8e6e86922ae12c69478da83700cf02" Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.535196 4898 scope.go:117] "RemoveContainer" containerID="1f5c599b1399a14317a7907f26476ed9e6086b8439892ac16c02341cc18cc39b" Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.571149 4898 scope.go:117] "RemoveContainer" containerID="ba3ff2156ec12a6e474e2c779fa2876ed2dc2592699760e0353699fa84fc25d6" Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.614098 4898 scope.go:117] "RemoveContainer" containerID="2fb6613803f2da2019b370243a7c8cf9cc4cd8e21700eec2fc51ff3e9b294af3" Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.646226 
Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.658247 4898 scope.go:117] "RemoveContainer" containerID="50177f10a0f1802a193266c9cadd5b928de9102eca5edfef8959a0d37e83c456"
Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.688105 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4dc4d"]
Jan 20 04:18:17 crc kubenswrapper[4898]: I0120 04:18:17.732086 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bf119bb-2bd0-418f-9b31-df14044054db" path="/var/lib/kubelet/pods/7bf119bb-2bd0-418f-9b31-df14044054db/volumes"
Jan 20 04:18:19 crc kubenswrapper[4898]: I0120 04:18:19.610844 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4dc4d" podUID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" containerName="registry-server" containerID="cri-o://4ea2c047b6c6b39a86f0017542f757899f7567982016c7ff85f2d3806192ce43" gracePeriod=2
Jan 20 04:18:20 crc kubenswrapper[4898]: I0120 04:18:20.620521 4898 generic.go:334] "Generic (PLEG): container finished" podID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" containerID="4ea2c047b6c6b39a86f0017542f757899f7567982016c7ff85f2d3806192ce43" exitCode=0
Jan 20 04:18:20 crc kubenswrapper[4898]: I0120 04:18:20.620614 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dc4d" event={"ID":"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b","Type":"ContainerDied","Data":"4ea2c047b6c6b39a86f0017542f757899f7567982016c7ff85f2d3806192ce43"}
Jan 20 04:18:20 crc kubenswrapper[4898]: I0120 04:18:20.929215 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dc4d"
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.078520 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-utilities\") pod \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") "
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.078651 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-catalog-content\") pod \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") "
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.078711 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf282\" (UniqueName: \"kubernetes.io/projected/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-kube-api-access-vf282\") pod \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\" (UID: \"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b\") "
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.079422 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-utilities" (OuterVolumeSpecName: "utilities") pod "a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" (UID: "a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.088723 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-kube-api-access-vf282" (OuterVolumeSpecName: "kube-api-access-vf282") pod "a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" (UID: "a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b"). InnerVolumeSpecName "kube-api-access-vf282". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.124167 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" (UID: "a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.180553 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf282\" (UniqueName: \"kubernetes.io/projected/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-kube-api-access-vf282\") on node \"crc\" DevicePath \"\""
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.180590 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.180604 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.632501 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dc4d"
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.632415 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dc4d" event={"ID":"a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b","Type":"ContainerDied","Data":"9542999e2ddf4dcfb2d1dd3adfd27b4811d939783af4b5933245002a98c1414b"}
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.632641 4898 scope.go:117] "RemoveContainer" containerID="4ea2c047b6c6b39a86f0017542f757899f7567982016c7ff85f2d3806192ce43"
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.654625 4898 scope.go:117] "RemoveContainer" containerID="60f642f1432df9b9d5713debacac6e8a11c72b306ab43408156a0f3a6e504e7a"
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.666259 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4dc4d"]
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.673779 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4dc4d"]
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.686847 4898 scope.go:117] "RemoveContainer" containerID="96d42dbad3cd834919bf3c93a7885f1cbe98d7ec91e1cfb0217a94d5f10e8697"
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.721178 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:18:21 crc kubenswrapper[4898]: E0120 04:18:21.721542 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:18:21 crc kubenswrapper[4898]: I0120 04:18:21.729577 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" path="/var/lib/kubelet/pods/a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b/volumes"
Jan 20 04:18:36 crc kubenswrapper[4898]: I0120 04:18:36.721524 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:18:36 crc kubenswrapper[4898]: E0120 04:18:36.723067 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:18:40 crc kubenswrapper[4898]: I0120 04:18:40.062363 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4bpb9"]
Jan 20 04:18:40 crc kubenswrapper[4898]: I0120 04:18:40.074496 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ppbg2"]
Jan 20 04:18:40 crc kubenswrapper[4898]: I0120 04:18:40.085485 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4bpb9"]
Jan 20 04:18:40 crc kubenswrapper[4898]: I0120 04:18:40.106579 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ppbg2"]
Jan 20 04:18:41 crc kubenswrapper[4898]: I0120 04:18:41.734940 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb" path="/var/lib/kubelet/pods/39b50dbf-2fbb-4fb2-bff6-4ef2b0272ecb/volumes"
Jan 20 04:18:41 crc kubenswrapper[4898]: I0120 04:18:41.736112 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853037a4-153d-47a7-bc24-69e16c937e41" path="/var/lib/kubelet/pods/853037a4-153d-47a7-bc24-69e16c937e41/volumes"
Jan 20 04:18:47 crc kubenswrapper[4898]: I0120 04:18:47.721354 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:18:47 crc kubenswrapper[4898]: E0120 04:18:47.722577 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:19:00 crc kubenswrapper[4898]: I0120 04:19:00.721122 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:19:00 crc kubenswrapper[4898]: E0120 04:19:00.721826 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:19:12 crc kubenswrapper[4898]: I0120 04:19:12.722304 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:19:12 crc kubenswrapper[4898]: E0120 04:19:12.723627 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:19:17 crc kubenswrapper[4898]: I0120 04:19:17.799971 4898 scope.go:117] "RemoveContainer" containerID="071a66f52ab08b214495046f98666855f818e4a699c1d6b8ee7793840d70f9ff"
Jan 20 04:19:17 crc kubenswrapper[4898]: I0120 04:19:17.859598 4898 scope.go:117] "RemoveContainer" containerID="628dedf9c3f0ab41144b38c7b6eebb8800326b22fcbe8ea61284a2c9718104d8"
Jan 20 04:19:17 crc kubenswrapper[4898]: I0120 04:19:17.899261 4898 scope.go:117] "RemoveContainer" containerID="7e5df16546237698f944cd41eb647f1590345be5ce7e64a2b97a0dcdc547a879"
Jan 20 04:19:24 crc kubenswrapper[4898]: I0120 04:19:24.083584 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-2s9t2"]
Jan 20 04:19:24 crc kubenswrapper[4898]: I0120 04:19:24.097989 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-2s9t2"]
Jan 20 04:19:25 crc kubenswrapper[4898]: I0120 04:19:25.736983 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff" path="/var/lib/kubelet/pods/2d1e7f1d-d5ec-4e51-bf57-d1083fb39dff/volumes"
Jan 20 04:19:27 crc kubenswrapper[4898]: I0120 04:19:27.721139 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:19:27 crc kubenswrapper[4898]: E0120 04:19:27.721760 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:19:41 crc kubenswrapper[4898]: I0120 04:19:41.721810 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:19:41 crc kubenswrapper[4898]: E0120 04:19:41.722578 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:19:56 crc kubenswrapper[4898]: I0120 04:19:56.721691 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:19:56 crc kubenswrapper[4898]: E0120 04:19:56.722275 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:20:10 crc kubenswrapper[4898]: I0120 04:20:10.720841 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:20:10 crc kubenswrapper[4898]: E0120 04:20:10.721493 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:20:18 crc kubenswrapper[4898]: I0120 04:20:18.013764 4898 scope.go:117] "RemoveContainer" containerID="d837675c0c7561d80a191261b45b9dd86f0a334435bf491fae2e39e942fa3b6c"
Jan 20 04:20:25 crc kubenswrapper[4898]: I0120 04:20:25.721499 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:20:25 crc kubenswrapper[4898]: E0120 04:20:25.722109 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:20:37 crc kubenswrapper[4898]: I0120 04:20:37.722170 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:20:37 crc kubenswrapper[4898]: E0120 04:20:37.723168 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:20:50 crc kubenswrapper[4898]: I0120 04:20:50.721255 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:20:50 crc kubenswrapper[4898]: E0120 04:20:50.722063 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:21:01 crc kubenswrapper[4898]: I0120 04:21:01.721911 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:21:01 crc kubenswrapper[4898]: E0120 04:21:01.722835 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:21:14 crc kubenswrapper[4898]: I0120 04:21:14.721996 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:21:14 crc kubenswrapper[4898]: E0120 04:21:14.723288 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:21:25 crc kubenswrapper[4898]: I0120 04:21:25.721787 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:21:25 crc kubenswrapper[4898]: E0120 04:21:25.722683 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:21:40 crc kubenswrapper[4898]: I0120 04:21:40.721380 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:21:40 crc kubenswrapper[4898]: E0120 04:21:40.722340 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89" Jan 20 04:21:40 crc kubenswrapper[4898]: E0120 04:21:40.722340 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.325821 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dnwqf"] Jan 20 04:21:49 crc kubenswrapper[4898]: E0120 04:21:49.326940 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" containerName="registry-server" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.326973 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" containerName="registry-server" Jan 20 04:21:49 crc kubenswrapper[4898]: E0120 04:21:49.327000 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" containerName="extract-content" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.327012 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" containerName="extract-content" Jan 20 04:21:49 crc kubenswrapper[4898]: E0120 04:21:49.327037 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" containerName="extract-utilities" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.327074 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" containerName="extract-utilities" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.327305 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ccf6db-6dfe-4c64-bd65-8e6dcda1269b" containerName="registry-server" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.329091 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.353732 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnwqf"] Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.431365 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-utilities\") pod \"redhat-operators-dnwqf\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") " pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.431473 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7phcw\" (UniqueName: \"kubernetes.io/projected/f3c042bc-aff9-4dea-82a1-9969a28d59d7-kube-api-access-7phcw\") pod \"redhat-operators-dnwqf\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") " pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.431531 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-catalog-content\") pod \"redhat-operators-dnwqf\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") " pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.534465 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-utilities\") pod \"redhat-operators-dnwqf\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") " pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.534550 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7phcw\" (UniqueName: \"kubernetes.io/projected/f3c042bc-aff9-4dea-82a1-9969a28d59d7-kube-api-access-7phcw\") pod \"redhat-operators-dnwqf\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") " pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.534600 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-catalog-content\") pod \"redhat-operators-dnwqf\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") " pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.535196 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-utilities\") pod \"redhat-operators-dnwqf\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") " pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.535337 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-catalog-content\") pod \"redhat-operators-dnwqf\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") " pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.561538 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
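
Each volume above passes through the same three stages: VerifyControllerAttachedVolume, "MountVolume started", then "MountVolume.SetUp succeeded". That is the volume manager reconciling desired state (volumes the pod spec wants) against actual state (volumes currently mounted). A simplified Go sketch of that reconcile pattern; the types and names here are illustrative, not kubelet's volumemanager code:

    package main

    import "fmt"

    type volume struct{ name, plugin string }

    // reconcile mounts anything desired that is not yet in the actual state,
    // mirroring the started/succeeded pairs in the records above.
    func reconcile(desired []volume, mounted map[string]bool) {
        for _, v := range desired {
            if mounted[v.name] {
                continue // already in actual state, nothing to do
            }
            fmt.Printf("MountVolume started for %q (%s)\n", v.name, v.plugin)
            mounted[v.name] = true
            fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
        }
    }

    func main() {
        desired := []volume{
            {"utilities", "kubernetes.io/empty-dir"},
            {"kube-api-access-7phcw", "kubernetes.io/projected"},
            {"catalog-content", "kubernetes.io/empty-dir"},
        }
        reconcile(desired, map[string]bool{})
    }
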
\"kube-api-access-7phcw\" (UniqueName: \"kubernetes.io/projected/f3c042bc-aff9-4dea-82a1-9969a28d59d7-kube-api-access-7phcw\") pod \"redhat-operators-dnwqf\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") " pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:49 crc kubenswrapper[4898]: I0120 04:21:49.657988 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:50 crc kubenswrapper[4898]: I0120 04:21:50.112216 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnwqf"] Jan 20 04:21:50 crc kubenswrapper[4898]: I0120 04:21:50.650302 4898 generic.go:334] "Generic (PLEG): container finished" podID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerID="41c5e4ce5fc2a311ad448fca48a603cf7a83de979fda038994097c4fe663cd33" exitCode=0 Jan 20 04:21:50 crc kubenswrapper[4898]: I0120 04:21:50.650345 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwqf" event={"ID":"f3c042bc-aff9-4dea-82a1-9969a28d59d7","Type":"ContainerDied","Data":"41c5e4ce5fc2a311ad448fca48a603cf7a83de979fda038994097c4fe663cd33"} Jan 20 04:21:50 crc kubenswrapper[4898]: I0120 04:21:50.650371 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwqf" event={"ID":"f3c042bc-aff9-4dea-82a1-9969a28d59d7","Type":"ContainerStarted","Data":"b64bea9680976981506e10b7af3b9bdfcbf2f5e60eb2dacad09758630c0fbf9b"} Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.669244 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwqf" event={"ID":"f3c042bc-aff9-4dea-82a1-9969a28d59d7","Type":"ContainerStarted","Data":"af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01"} Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.721201 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89" Jan 20 04:21:51 crc kubenswrapper[4898]: E0120 04:21:51.721956 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.732548 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m8hc2"] Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.734912 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.738765 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8hc2"] Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.772859 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-utilities\") pod \"redhat-marketplace-m8hc2\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.772898 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxzhk\" (UniqueName: \"kubernetes.io/projected/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-kube-api-access-rxzhk\") pod \"redhat-marketplace-m8hc2\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.773028 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-catalog-content\") pod \"redhat-marketplace-m8hc2\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.874886 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-utilities\") pod \"redhat-marketplace-m8hc2\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.874937 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxzhk\" (UniqueName: \"kubernetes.io/projected/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-kube-api-access-rxzhk\") pod \"redhat-marketplace-m8hc2\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.875046 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-catalog-content\") pod \"redhat-marketplace-m8hc2\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.875424 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-utilities\") pod \"redhat-marketplace-m8hc2\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.875546 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-catalog-content\") pod \"redhat-marketplace-m8hc2\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:21:51 crc kubenswrapper[4898]: I0120 04:21:51.899160 4898 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rxzhk\" (UniqueName: \"kubernetes.io/projected/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-kube-api-access-rxzhk\") pod \"redhat-marketplace-m8hc2\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:21:52 crc kubenswrapper[4898]: I0120 04:21:52.049842 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:21:52 crc kubenswrapper[4898]: I0120 04:21:52.536252 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8hc2"] Jan 20 04:21:52 crc kubenswrapper[4898]: W0120 04:21:52.542462 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2d5f4ae_3d94_4d99_9e9d_6049ab2d5244.slice/crio-a860f00e9c02a86faaae618ad1bbf735a925caeb0c363b2739f7cf4ab4ecfa34 WatchSource:0}: Error finding container a860f00e9c02a86faaae618ad1bbf735a925caeb0c363b2739f7cf4ab4ecfa34: Status 404 returned error can't find the container with id a860f00e9c02a86faaae618ad1bbf735a925caeb0c363b2739f7cf4ab4ecfa34 Jan 20 04:21:52 crc kubenswrapper[4898]: I0120 04:21:52.680639 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8hc2" event={"ID":"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244","Type":"ContainerStarted","Data":"a860f00e9c02a86faaae618ad1bbf735a925caeb0c363b2739f7cf4ab4ecfa34"} Jan 20 04:21:53 crc kubenswrapper[4898]: I0120 04:21:53.691723 4898 generic.go:334] "Generic (PLEG): container finished" podID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerID="af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01" exitCode=0 Jan 20 04:21:53 crc kubenswrapper[4898]: I0120 04:21:53.691812 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwqf" event={"ID":"f3c042bc-aff9-4dea-82a1-9969a28d59d7","Type":"ContainerDied","Data":"af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01"} Jan 20 04:21:53 crc kubenswrapper[4898]: I0120 04:21:53.693715 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8hc2" event={"ID":"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244","Type":"ContainerStarted","Data":"4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0"} Jan 20 04:21:53 crc kubenswrapper[4898]: I0120 04:21:53.696093 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 04:21:54 crc kubenswrapper[4898]: I0120 04:21:54.710116 4898 generic.go:334] "Generic (PLEG): container finished" podID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" containerID="4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0" exitCode=0 Jan 20 04:21:54 crc kubenswrapper[4898]: I0120 04:21:54.710169 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8hc2" event={"ID":"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244","Type":"ContainerDied","Data":"4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0"} Jan 20 04:21:55 crc kubenswrapper[4898]: I0120 04:21:55.733915 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwqf" event={"ID":"f3c042bc-aff9-4dea-82a1-9969a28d59d7","Type":"ContainerStarted","Data":"450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87"} Jan 20 04:21:55 crc kubenswrapper[4898]: I0120 04:21:55.756570 4898 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dnwqf" podStartSLOduration=2.638293833 podStartE2EDuration="6.756553226s" podCreationTimestamp="2026-01-20 04:21:49 +0000 UTC" firstStartedPulling="2026-01-20 04:21:50.652036416 +0000 UTC m=+1957.251824275" lastFinishedPulling="2026-01-20 04:21:54.770295799 +0000 UTC m=+1961.370083668" observedRunningTime="2026-01-20 04:21:55.750518366 +0000 UTC m=+1962.350306215" watchObservedRunningTime="2026-01-20 04:21:55.756553226 +0000 UTC m=+1962.356341085" Jan 20 04:21:56 crc kubenswrapper[4898]: I0120 04:21:56.737511 4898 generic.go:334] "Generic (PLEG): container finished" podID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" containerID="f03b3109288211c07a3604b2004972f4ea0af8c742a5e0ea6a170c345b8cbd34" exitCode=0 Jan 20 04:21:56 crc kubenswrapper[4898]: I0120 04:21:56.737573 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8hc2" event={"ID":"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244","Type":"ContainerDied","Data":"f03b3109288211c07a3604b2004972f4ea0af8c742a5e0ea6a170c345b8cbd34"} Jan 20 04:21:57 crc kubenswrapper[4898]: I0120 04:21:57.746132 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8hc2" event={"ID":"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244","Type":"ContainerStarted","Data":"b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6"} Jan 20 04:21:57 crc kubenswrapper[4898]: I0120 04:21:57.767422 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m8hc2" podStartSLOduration=4.372054165 podStartE2EDuration="6.76740527s" podCreationTimestamp="2026-01-20 04:21:51 +0000 UTC" firstStartedPulling="2026-01-20 04:21:54.712993925 +0000 UTC m=+1961.312781824" lastFinishedPulling="2026-01-20 04:21:57.10834507 +0000 UTC m=+1963.708132929" observedRunningTime="2026-01-20 04:21:57.763665193 +0000 UTC m=+1964.363453082" watchObservedRunningTime="2026-01-20 04:21:57.76740527 +0000 UTC m=+1964.367193129" Jan 20 04:21:59 crc kubenswrapper[4898]: I0120 04:21:59.658841 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:21:59 crc kubenswrapper[4898]: I0120 04:21:59.658923 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:22:00 crc kubenswrapper[4898]: I0120 04:22:00.716182 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dnwqf" podUID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerName="registry-server" probeResult="failure" output=< Jan 20 04:22:00 crc kubenswrapper[4898]: timeout: failed to connect service ":50051" within 1s Jan 20 04:22:00 crc kubenswrapper[4898]: > Jan 20 04:22:02 crc kubenswrapper[4898]: I0120 04:22:02.050525 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:22:02 crc kubenswrapper[4898]: I0120 04:22:02.050922 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:22:02 crc kubenswrapper[4898]: I0120 04:22:02.138201 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:22:02 crc kubenswrapper[4898]: I0120 04:22:02.842101 4898 kubelet.go:2542] "SyncLoop (probe)" 
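
The startup probe output above, "timeout: failed to connect service \":50051\" within 1s", is the catalog pod's health check giving up after its one-second budget because the registry-server is still loading its catalog; it passes a few records later once the server is listening. A self-contained approximation of that failure mode, with a plain TCP dial standing in for the real gRPC health probe (an assumption, not this pod's actual probe binary):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same 1s budget as the probe in the log; fails fast with a
        // timeout or connection-refused error while nothing listens yet.
        conn, err := net.DialTimeout("tcp", "localhost:50051", 1*time.Second)
        if err != nil {
            fmt.Println("probe failure:", err)
            return
        }
        conn.Close()
        fmt.Println("probe success")
    }
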
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:22:02 crc kubenswrapper[4898]: I0120 04:22:02.909367 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8hc2"] Jan 20 04:22:04 crc kubenswrapper[4898]: I0120 04:22:04.721668 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89" Jan 20 04:22:04 crc kubenswrapper[4898]: E0120 04:22:04.722295 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:22:04 crc kubenswrapper[4898]: I0120 04:22:04.807094 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m8hc2" podUID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" containerName="registry-server" containerID="cri-o://b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6" gracePeriod=2 Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.348230 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.504031 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-utilities\") pod \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.504145 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-catalog-content\") pod \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.504319 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxzhk\" (UniqueName: \"kubernetes.io/projected/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-kube-api-access-rxzhk\") pod \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\" (UID: \"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244\") " Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.504946 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-utilities" (OuterVolumeSpecName: "utilities") pod "b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" (UID: "b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.510733 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-kube-api-access-rxzhk" (OuterVolumeSpecName: "kube-api-access-rxzhk") pod "b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" (UID: "b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244"). InnerVolumeSpecName "kube-api-access-rxzhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.531808 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" (UID: "b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.606337 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxzhk\" (UniqueName: \"kubernetes.io/projected/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-kube-api-access-rxzhk\") on node \"crc\" DevicePath \"\"" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.606650 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.606665 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.818607 4898 generic.go:334] "Generic (PLEG): container finished" podID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" containerID="b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6" exitCode=0 Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.818669 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8hc2" event={"ID":"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244","Type":"ContainerDied","Data":"b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6"} Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.818723 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8hc2" event={"ID":"b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244","Type":"ContainerDied","Data":"a860f00e9c02a86faaae618ad1bbf735a925caeb0c363b2739f7cf4ab4ecfa34"} Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.818755 4898 scope.go:117] "RemoveContainer" containerID="b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.819533 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8hc2" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.860867 4898 scope.go:117] "RemoveContainer" containerID="f03b3109288211c07a3604b2004972f4ea0af8c742a5e0ea6a170c345b8cbd34" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.871308 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8hc2"] Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.880085 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8hc2"] Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.891660 4898 scope.go:117] "RemoveContainer" containerID="4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.938574 4898 scope.go:117] "RemoveContainer" containerID="b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6" Jan 20 04:22:05 crc kubenswrapper[4898]: E0120 04:22:05.939470 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6\": container with ID starting with b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6 not found: ID does not exist" containerID="b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.939528 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6"} err="failed to get container status \"b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6\": rpc error: code = NotFound desc = could not find container \"b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6\": container with ID starting with b34e52bf45a578b8602c31c25f4055ac5c82960434080ad70c3b1cd905bbb0f6 not found: ID does not exist" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.939566 4898 scope.go:117] "RemoveContainer" containerID="f03b3109288211c07a3604b2004972f4ea0af8c742a5e0ea6a170c345b8cbd34" Jan 20 04:22:05 crc kubenswrapper[4898]: E0120 04:22:05.939989 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f03b3109288211c07a3604b2004972f4ea0af8c742a5e0ea6a170c345b8cbd34\": container with ID starting with f03b3109288211c07a3604b2004972f4ea0af8c742a5e0ea6a170c345b8cbd34 not found: ID does not exist" containerID="f03b3109288211c07a3604b2004972f4ea0af8c742a5e0ea6a170c345b8cbd34" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.940059 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f03b3109288211c07a3604b2004972f4ea0af8c742a5e0ea6a170c345b8cbd34"} err="failed to get container status \"f03b3109288211c07a3604b2004972f4ea0af8c742a5e0ea6a170c345b8cbd34\": rpc error: code = NotFound desc = could not find container \"f03b3109288211c07a3604b2004972f4ea0af8c742a5e0ea6a170c345b8cbd34\": container with ID starting with f03b3109288211c07a3604b2004972f4ea0af8c742a5e0ea6a170c345b8cbd34 not found: ID does not exist" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.940091 4898 scope.go:117] "RemoveContainer" containerID="4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0" Jan 20 04:22:05 crc kubenswrapper[4898]: E0120 04:22:05.940622 4898 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0\": container with ID starting with 4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0 not found: ID does not exist" containerID="4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0" Jan 20 04:22:05 crc kubenswrapper[4898]: I0120 04:22:05.940676 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0"} err="failed to get container status \"4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0\": rpc error: code = NotFound desc = could not find container \"4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0\": container with ID starting with 4411debbd038eb09701d1d483b79cb9a55393e748731d857b50ef07f60262bb0 not found: ID does not exist" Jan 20 04:22:07 crc kubenswrapper[4898]: I0120 04:22:07.740114 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" path="/var/lib/kubelet/pods/b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244/volumes" Jan 20 04:22:09 crc kubenswrapper[4898]: I0120 04:22:09.740892 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:22:09 crc kubenswrapper[4898]: I0120 04:22:09.809897 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dnwqf" Jan 20 04:22:10 crc kubenswrapper[4898]: I0120 04:22:10.013630 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnwqf"] Jan 20 04:22:10 crc kubenswrapper[4898]: I0120 04:22:10.870400 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dnwqf" podUID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerName="registry-server" containerID="cri-o://450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87" gracePeriod=2 Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.423971 4898 util.go:48] "No ready sandbox for pod can be found. 
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.533325 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7phcw\" (UniqueName: \"kubernetes.io/projected/f3c042bc-aff9-4dea-82a1-9969a28d59d7-kube-api-access-7phcw\") pod \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") "
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.533542 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-catalog-content\") pod \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") "
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.533684 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-utilities\") pod \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\" (UID: \"f3c042bc-aff9-4dea-82a1-9969a28d59d7\") "
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.534471 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-utilities" (OuterVolumeSpecName: "utilities") pod "f3c042bc-aff9-4dea-82a1-9969a28d59d7" (UID: "f3c042bc-aff9-4dea-82a1-9969a28d59d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.539028 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c042bc-aff9-4dea-82a1-9969a28d59d7-kube-api-access-7phcw" (OuterVolumeSpecName: "kube-api-access-7phcw") pod "f3c042bc-aff9-4dea-82a1-9969a28d59d7" (UID: "f3c042bc-aff9-4dea-82a1-9969a28d59d7"). InnerVolumeSpecName "kube-api-access-7phcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.636509 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7phcw\" (UniqueName: \"kubernetes.io/projected/f3c042bc-aff9-4dea-82a1-9969a28d59d7-kube-api-access-7phcw\") on node \"crc\" DevicePath \"\""
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.636536 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.644163 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3c042bc-aff9-4dea-82a1-9969a28d59d7" (UID: "f3c042bc-aff9-4dea-82a1-9969a28d59d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.738526 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c042bc-aff9-4dea-82a1-9969a28d59d7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.884239 4898 generic.go:334] "Generic (PLEG): container finished" podID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerID="450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87" exitCode=0
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.884289 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwqf" event={"ID":"f3c042bc-aff9-4dea-82a1-9969a28d59d7","Type":"ContainerDied","Data":"450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87"}
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.884319 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwqf" event={"ID":"f3c042bc-aff9-4dea-82a1-9969a28d59d7","Type":"ContainerDied","Data":"b64bea9680976981506e10b7af3b9bdfcbf2f5e60eb2dacad09758630c0fbf9b"}
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.884341 4898 scope.go:117] "RemoveContainer" containerID="450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87"
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.884346 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnwqf"
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.916283 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnwqf"]
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.917375 4898 scope.go:117] "RemoveContainer" containerID="af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01"
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.928856 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dnwqf"]
Jan 20 04:22:11 crc kubenswrapper[4898]: I0120 04:22:11.940191 4898 scope.go:117] "RemoveContainer" containerID="41c5e4ce5fc2a311ad448fca48a603cf7a83de979fda038994097c4fe663cd33"
Jan 20 04:22:12 crc kubenswrapper[4898]: I0120 04:22:12.008360 4898 scope.go:117] "RemoveContainer" containerID="450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87"
Jan 20 04:22:12 crc kubenswrapper[4898]: E0120 04:22:12.009067 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87\": container with ID starting with 450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87 not found: ID does not exist" containerID="450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87"
Jan 20 04:22:12 crc kubenswrapper[4898]: I0120 04:22:12.009204 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87"} err="failed to get container status \"450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87\": rpc error: code = NotFound desc = could not find container \"450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87\": container with ID starting with 450d94baf8c70cd1e862baf984fabc7942d46f89e2a0102af865e42b96c3db87 not found: ID does not exist"
Jan 20 04:22:12 crc kubenswrapper[4898]: I0120 04:22:12.009314 4898 scope.go:117] "RemoveContainer" containerID="af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01"
Jan 20 04:22:12 crc kubenswrapper[4898]: E0120 04:22:12.010767 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01\": container with ID starting with af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01 not found: ID does not exist" containerID="af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01"
Jan 20 04:22:12 crc kubenswrapper[4898]: I0120 04:22:12.010816 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01"} err="failed to get container status \"af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01\": rpc error: code = NotFound desc = could not find container \"af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01\": container with ID starting with af7b795b0aa586ae081b6c86f3b4f74bf4ac41a7aaf94dcb0709c97759e0fa01 not found: ID does not exist"
Jan 20 04:22:12 crc kubenswrapper[4898]: I0120 04:22:12.010850 4898 scope.go:117] "RemoveContainer" containerID="41c5e4ce5fc2a311ad448fca48a603cf7a83de979fda038994097c4fe663cd33"
Jan 20 04:22:12 crc kubenswrapper[4898]: E0120 04:22:12.015652 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41c5e4ce5fc2a311ad448fca48a603cf7a83de979fda038994097c4fe663cd33\": container with ID starting with 41c5e4ce5fc2a311ad448fca48a603cf7a83de979fda038994097c4fe663cd33 not found: ID does not exist" containerID="41c5e4ce5fc2a311ad448fca48a603cf7a83de979fda038994097c4fe663cd33"
Jan 20 04:22:12 crc kubenswrapper[4898]: I0120 04:22:12.015831 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c5e4ce5fc2a311ad448fca48a603cf7a83de979fda038994097c4fe663cd33"} err="failed to get container status \"41c5e4ce5fc2a311ad448fca48a603cf7a83de979fda038994097c4fe663cd33\": rpc error: code = NotFound desc = could not find container \"41c5e4ce5fc2a311ad448fca48a603cf7a83de979fda038994097c4fe663cd33\": container with ID starting with 41c5e4ce5fc2a311ad448fca48a603cf7a83de979fda038994097c4fe663cd33 not found: ID does not exist"
Jan 20 04:22:13 crc kubenswrapper[4898]: I0120 04:22:13.740113 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" path="/var/lib/kubelet/pods/f3c042bc-aff9-4dea-82a1-9969a28d59d7/volumes"
Jan 20 04:22:15 crc kubenswrapper[4898]: I0120 04:22:15.722579 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89"
Jan 20 04:22:16 crc kubenswrapper[4898]: I0120 04:22:16.951014 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"ffff4a904e1e002352f339faee7bbcfc1a1804ea38aff620708664051cc35bcb"}
Jan 20 04:24:39 crc kubenswrapper[4898]: I0120 04:24:39.976340 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 04:24:39 crc kubenswrapper[4898]: I0120 04:24:39.977154 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 04:25:09 crc kubenswrapper[4898]: I0120 04:25:09.975521 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 04:25:09 crc kubenswrapper[4898]: I0120 04:25:09.976060 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 04:25:39 crc kubenswrapper[4898]: I0120 04:25:39.976492 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 04:25:39 crc kubenswrapper[4898]: I0120 04:25:39.977091 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 04:25:39 crc kubenswrapper[4898]: I0120 04:25:39.977154 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6"
Jan 20 04:25:39 crc kubenswrapper[4898]: I0120 04:25:39.978075 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffff4a904e1e002352f339faee7bbcfc1a1804ea38aff620708664051cc35bcb"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 04:25:39 crc kubenswrapper[4898]: I0120 04:25:39.978187 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://ffff4a904e1e002352f339faee7bbcfc1a1804ea38aff620708664051cc35bcb" gracePeriod=600
Jan 20 04:25:40 crc kubenswrapper[4898]: I0120 04:25:40.892223 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="ffff4a904e1e002352f339faee7bbcfc1a1804ea38aff620708664051cc35bcb" exitCode=0
Jan 20 04:25:40 crc kubenswrapper[4898]: I0120 04:25:40.892300 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"ffff4a904e1e002352f339faee7bbcfc1a1804ea38aff620708664051cc35bcb"}
crc kubenswrapper[4898]: I0120 04:25:40.893254 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84"} Jan 20 04:25:40 crc kubenswrapper[4898]: I0120 04:25:40.893297 4898 scope.go:117] "RemoveContainer" containerID="53177de85f2922a408501dd1619a3bc65509d224b52dd805a5b300cad6909e89" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.602957 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tcmk9"] Jan 20 04:27:07 crc kubenswrapper[4898]: E0120 04:27:07.604663 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" containerName="extract-utilities" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.604698 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" containerName="extract-utilities" Jan 20 04:27:07 crc kubenswrapper[4898]: E0120 04:27:07.604729 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerName="registry-server" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.604748 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerName="registry-server" Jan 20 04:27:07 crc kubenswrapper[4898]: E0120 04:27:07.604788 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" containerName="extract-content" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.604806 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" containerName="extract-content" Jan 20 04:27:07 crc kubenswrapper[4898]: E0120 04:27:07.604854 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" containerName="registry-server" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.604871 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" containerName="registry-server" Jan 20 04:27:07 crc kubenswrapper[4898]: E0120 04:27:07.604906 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerName="extract-content" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.604923 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerName="extract-content" Jan 20 04:27:07 crc kubenswrapper[4898]: E0120 04:27:07.604950 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerName="extract-utilities" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.604969 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerName="extract-utilities" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.605395 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c042bc-aff9-4dea-82a1-9969a28d59d7" containerName="registry-server" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.605487 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d5f4ae-3d94-4d99-9e9d-6049ab2d5244" containerName="registry-server" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.608485 4898 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.615940 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tcmk9"] Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.685984 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hdvf\" (UniqueName: \"kubernetes.io/projected/0bcc5802-1154-4600-8e24-e29a38757e37-kube-api-access-4hdvf\") pod \"community-operators-tcmk9\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.686059 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-catalog-content\") pod \"community-operators-tcmk9\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.686355 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-utilities\") pod \"community-operators-tcmk9\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.788584 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-utilities\") pod \"community-operators-tcmk9\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.788745 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hdvf\" (UniqueName: \"kubernetes.io/projected/0bcc5802-1154-4600-8e24-e29a38757e37-kube-api-access-4hdvf\") pod \"community-operators-tcmk9\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.788779 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-catalog-content\") pod \"community-operators-tcmk9\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.789247 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-utilities\") pod \"community-operators-tcmk9\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.789261 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-catalog-content\") pod \"community-operators-tcmk9\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.816696 4898 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hdvf\" (UniqueName: \"kubernetes.io/projected/0bcc5802-1154-4600-8e24-e29a38757e37-kube-api-access-4hdvf\") pod \"community-operators-tcmk9\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:07 crc kubenswrapper[4898]: I0120 04:27:07.956661 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:09 crc kubenswrapper[4898]: I0120 04:27:09.043958 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tcmk9"] Jan 20 04:27:09 crc kubenswrapper[4898]: W0120 04:27:09.060119 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bcc5802_1154_4600_8e24_e29a38757e37.slice/crio-23782024582f8dbee6846fd38faa6a3f9540c6b76cb006ba567d4d186d95c205 WatchSource:0}: Error finding container 23782024582f8dbee6846fd38faa6a3f9540c6b76cb006ba567d4d186d95c205: Status 404 returned error can't find the container with id 23782024582f8dbee6846fd38faa6a3f9540c6b76cb006ba567d4d186d95c205 Jan 20 04:27:09 crc kubenswrapper[4898]: I0120 04:27:09.741970 4898 generic.go:334] "Generic (PLEG): container finished" podID="0bcc5802-1154-4600-8e24-e29a38757e37" containerID="c87c43f775340865853fc865fb006103063ddb178bbdf7bf1ead0279454857b6" exitCode=0 Jan 20 04:27:09 crc kubenswrapper[4898]: I0120 04:27:09.742084 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmk9" event={"ID":"0bcc5802-1154-4600-8e24-e29a38757e37","Type":"ContainerDied","Data":"c87c43f775340865853fc865fb006103063ddb178bbdf7bf1ead0279454857b6"} Jan 20 04:27:09 crc kubenswrapper[4898]: I0120 04:27:09.742400 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmk9" event={"ID":"0bcc5802-1154-4600-8e24-e29a38757e37","Type":"ContainerStarted","Data":"23782024582f8dbee6846fd38faa6a3f9540c6b76cb006ba567d4d186d95c205"} Jan 20 04:27:09 crc kubenswrapper[4898]: I0120 04:27:09.746245 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 04:27:10 crc kubenswrapper[4898]: I0120 04:27:10.754932 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmk9" event={"ID":"0bcc5802-1154-4600-8e24-e29a38757e37","Type":"ContainerStarted","Data":"b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9"} Jan 20 04:27:11 crc kubenswrapper[4898]: I0120 04:27:11.768060 4898 generic.go:334] "Generic (PLEG): container finished" podID="0bcc5802-1154-4600-8e24-e29a38757e37" containerID="b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9" exitCode=0 Jan 20 04:27:11 crc kubenswrapper[4898]: I0120 04:27:11.768124 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmk9" event={"ID":"0bcc5802-1154-4600-8e24-e29a38757e37","Type":"ContainerDied","Data":"b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9"} Jan 20 04:27:12 crc kubenswrapper[4898]: I0120 04:27:12.778737 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmk9" event={"ID":"0bcc5802-1154-4600-8e24-e29a38757e37","Type":"ContainerStarted","Data":"feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb"} Jan 20 04:27:12 crc 
kubenswrapper[4898]: I0120 04:27:12.805748 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tcmk9" podStartSLOduration=3.269415066 podStartE2EDuration="5.805724698s" podCreationTimestamp="2026-01-20 04:27:07 +0000 UTC" firstStartedPulling="2026-01-20 04:27:09.744145154 +0000 UTC m=+2276.343933043" lastFinishedPulling="2026-01-20 04:27:12.280454816 +0000 UTC m=+2278.880242675" observedRunningTime="2026-01-20 04:27:12.798608754 +0000 UTC m=+2279.398396623" watchObservedRunningTime="2026-01-20 04:27:12.805724698 +0000 UTC m=+2279.405512557" Jan 20 04:27:17 crc kubenswrapper[4898]: I0120 04:27:17.956969 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:17 crc kubenswrapper[4898]: I0120 04:27:17.957527 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:18 crc kubenswrapper[4898]: I0120 04:27:18.008606 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:18 crc kubenswrapper[4898]: I0120 04:27:18.922356 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:18 crc kubenswrapper[4898]: I0120 04:27:18.983364 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tcmk9"] Jan 20 04:27:20 crc kubenswrapper[4898]: I0120 04:27:20.880846 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tcmk9" podUID="0bcc5802-1154-4600-8e24-e29a38757e37" containerName="registry-server" containerID="cri-o://feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb" gracePeriod=2 Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.426840 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.505057 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hdvf\" (UniqueName: \"kubernetes.io/projected/0bcc5802-1154-4600-8e24-e29a38757e37-kube-api-access-4hdvf\") pod \"0bcc5802-1154-4600-8e24-e29a38757e37\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.505121 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-catalog-content\") pod \"0bcc5802-1154-4600-8e24-e29a38757e37\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.505367 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-utilities\") pod \"0bcc5802-1154-4600-8e24-e29a38757e37\" (UID: \"0bcc5802-1154-4600-8e24-e29a38757e37\") " Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.507059 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-utilities" (OuterVolumeSpecName: "utilities") pod "0bcc5802-1154-4600-8e24-e29a38757e37" (UID: "0bcc5802-1154-4600-8e24-e29a38757e37"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.517734 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bcc5802-1154-4600-8e24-e29a38757e37-kube-api-access-4hdvf" (OuterVolumeSpecName: "kube-api-access-4hdvf") pod "0bcc5802-1154-4600-8e24-e29a38757e37" (UID: "0bcc5802-1154-4600-8e24-e29a38757e37"). InnerVolumeSpecName "kube-api-access-4hdvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.554112 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bcc5802-1154-4600-8e24-e29a38757e37" (UID: "0bcc5802-1154-4600-8e24-e29a38757e37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.607365 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hdvf\" (UniqueName: \"kubernetes.io/projected/0bcc5802-1154-4600-8e24-e29a38757e37-kube-api-access-4hdvf\") on node \"crc\" DevicePath \"\"" Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.607397 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.607407 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcc5802-1154-4600-8e24-e29a38757e37-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.896741 4898 generic.go:334] "Generic (PLEG): container finished" podID="0bcc5802-1154-4600-8e24-e29a38757e37" containerID="feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb" exitCode=0 Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.896835 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmk9" event={"ID":"0bcc5802-1154-4600-8e24-e29a38757e37","Type":"ContainerDied","Data":"feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb"} Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.896882 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tcmk9" Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.897226 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tcmk9" event={"ID":"0bcc5802-1154-4600-8e24-e29a38757e37","Type":"ContainerDied","Data":"23782024582f8dbee6846fd38faa6a3f9540c6b76cb006ba567d4d186d95c205"} Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.897267 4898 scope.go:117] "RemoveContainer" containerID="feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb" Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.930798 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tcmk9"] Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.942516 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tcmk9"] Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.959633 4898 scope.go:117] "RemoveContainer" containerID="b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9" Jan 20 04:27:21 crc kubenswrapper[4898]: I0120 04:27:21.999416 4898 scope.go:117] "RemoveContainer" containerID="c87c43f775340865853fc865fb006103063ddb178bbdf7bf1ead0279454857b6" Jan 20 04:27:22 crc kubenswrapper[4898]: I0120 04:27:22.062458 4898 scope.go:117] "RemoveContainer" containerID="feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb" Jan 20 04:27:22 crc kubenswrapper[4898]: E0120 04:27:22.063259 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb\": container with ID starting with feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb not found: ID does not exist" containerID="feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb" Jan 20 04:27:22 crc kubenswrapper[4898]: I0120 04:27:22.063320 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb"} err="failed to get container status \"feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb\": rpc error: code = NotFound desc = could not find container \"feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb\": container with ID starting with feaaa21c615d75aacc8270aa461bcf5cd7f0f707bf6a0540db09874d556a6afb not found: ID does not exist" Jan 20 04:27:22 crc kubenswrapper[4898]: I0120 04:27:22.063363 4898 scope.go:117] "RemoveContainer" containerID="b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9" Jan 20 04:27:22 crc kubenswrapper[4898]: E0120 04:27:22.064013 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9\": container with ID starting with b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9 not found: ID does not exist" containerID="b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9" Jan 20 04:27:22 crc kubenswrapper[4898]: I0120 04:27:22.064062 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9"} err="failed to get container status \"b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9\": rpc error: code = NotFound desc = could not find 
container \"b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9\": container with ID starting with b4b2427926ce80bf6c1c2b3200ecd08051e1bf917bc2bd5029f3b327b89ae9f9 not found: ID does not exist" Jan 20 04:27:22 crc kubenswrapper[4898]: I0120 04:27:22.064135 4898 scope.go:117] "RemoveContainer" containerID="c87c43f775340865853fc865fb006103063ddb178bbdf7bf1ead0279454857b6" Jan 20 04:27:22 crc kubenswrapper[4898]: E0120 04:27:22.064634 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c87c43f775340865853fc865fb006103063ddb178bbdf7bf1ead0279454857b6\": container with ID starting with c87c43f775340865853fc865fb006103063ddb178bbdf7bf1ead0279454857b6 not found: ID does not exist" containerID="c87c43f775340865853fc865fb006103063ddb178bbdf7bf1ead0279454857b6" Jan 20 04:27:22 crc kubenswrapper[4898]: I0120 04:27:22.064671 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c87c43f775340865853fc865fb006103063ddb178bbdf7bf1ead0279454857b6"} err="failed to get container status \"c87c43f775340865853fc865fb006103063ddb178bbdf7bf1ead0279454857b6\": rpc error: code = NotFound desc = could not find container \"c87c43f775340865853fc865fb006103063ddb178bbdf7bf1ead0279454857b6\": container with ID starting with c87c43f775340865853fc865fb006103063ddb178bbdf7bf1ead0279454857b6 not found: ID does not exist" Jan 20 04:27:23 crc kubenswrapper[4898]: I0120 04:27:23.739151 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bcc5802-1154-4600-8e24-e29a38757e37" path="/var/lib/kubelet/pods/0bcc5802-1154-4600-8e24-e29a38757e37/volumes" Jan 20 04:28:09 crc kubenswrapper[4898]: I0120 04:28:09.975843 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:28:09 crc kubenswrapper[4898]: I0120 04:28:09.976645 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:28:39 crc kubenswrapper[4898]: I0120 04:28:39.976527 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:28:39 crc kubenswrapper[4898]: I0120 04:28:39.977402 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:29:09 crc kubenswrapper[4898]: I0120 04:29:09.975983 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 
04:29:09 crc kubenswrapper[4898]: I0120 04:29:09.976839 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:29:09 crc kubenswrapper[4898]: I0120 04:29:09.976898 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 04:29:09 crc kubenswrapper[4898]: I0120 04:29:09.977911 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 04:29:09 crc kubenswrapper[4898]: I0120 04:29:09.977996 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" gracePeriod=600 Jan 20 04:29:10 crc kubenswrapper[4898]: E0120 04:29:10.112222 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:29:10 crc kubenswrapper[4898]: I0120 04:29:10.993206 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" exitCode=0 Jan 20 04:29:10 crc kubenswrapper[4898]: I0120 04:29:10.993260 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84"} Jan 20 04:29:10 crc kubenswrapper[4898]: I0120 04:29:10.993299 4898 scope.go:117] "RemoveContainer" containerID="ffff4a904e1e002352f339faee7bbcfc1a1804ea38aff620708664051cc35bcb" Jan 20 04:29:10 crc kubenswrapper[4898]: I0120 04:29:10.994258 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:29:10 crc kubenswrapper[4898]: E0120 04:29:10.994893 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:29:25 crc kubenswrapper[4898]: I0120 04:29:25.721132 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:29:25 crc 
kubenswrapper[4898]: E0120 04:29:25.722022 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:29:40 crc kubenswrapper[4898]: I0120 04:29:40.722201 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:29:40 crc kubenswrapper[4898]: E0120 04:29:40.723089 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:29:53 crc kubenswrapper[4898]: I0120 04:29:53.736544 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:29:53 crc kubenswrapper[4898]: E0120 04:29:53.738330 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.151849 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp"] Jan 20 04:30:00 crc kubenswrapper[4898]: E0120 04:30:00.152745 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcc5802-1154-4600-8e24-e29a38757e37" containerName="extract-content" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.152759 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcc5802-1154-4600-8e24-e29a38757e37" containerName="extract-content" Jan 20 04:30:00 crc kubenswrapper[4898]: E0120 04:30:00.152788 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcc5802-1154-4600-8e24-e29a38757e37" containerName="registry-server" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.152794 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcc5802-1154-4600-8e24-e29a38757e37" containerName="registry-server" Jan 20 04:30:00 crc kubenswrapper[4898]: E0120 04:30:00.152819 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcc5802-1154-4600-8e24-e29a38757e37" containerName="extract-utilities" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.152825 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcc5802-1154-4600-8e24-e29a38757e37" containerName="extract-utilities" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.152996 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bcc5802-1154-4600-8e24-e29a38757e37" containerName="registry-server" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.153707 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.159018 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp"] Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.161868 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.165799 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.230819 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-secret-volume\") pod \"collect-profiles-29481390-274pp\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.231039 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpmqm\" (UniqueName: \"kubernetes.io/projected/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-kube-api-access-fpmqm\") pod \"collect-profiles-29481390-274pp\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.231065 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-config-volume\") pod \"collect-profiles-29481390-274pp\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.332896 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpmqm\" (UniqueName: \"kubernetes.io/projected/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-kube-api-access-fpmqm\") pod \"collect-profiles-29481390-274pp\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.332945 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-config-volume\") pod \"collect-profiles-29481390-274pp\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.332988 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-secret-volume\") pod \"collect-profiles-29481390-274pp\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.334646 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-config-volume\") pod 
\"collect-profiles-29481390-274pp\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.340231 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-secret-volume\") pod \"collect-profiles-29481390-274pp\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.350619 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpmqm\" (UniqueName: \"kubernetes.io/projected/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-kube-api-access-fpmqm\") pod \"collect-profiles-29481390-274pp\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.481363 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:00 crc kubenswrapper[4898]: I0120 04:30:00.963993 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp"] Jan 20 04:30:01 crc kubenswrapper[4898]: I0120 04:30:01.460275 4898 generic.go:334] "Generic (PLEG): container finished" podID="8eb7a785-3b6e-40a9-ad1c-babff3bf1cef" containerID="68529a05f185e9c4737264bfb960294513d581c9dd9bf1147a3d80ccfe662438" exitCode=0 Jan 20 04:30:01 crc kubenswrapper[4898]: I0120 04:30:01.460508 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" event={"ID":"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef","Type":"ContainerDied","Data":"68529a05f185e9c4737264bfb960294513d581c9dd9bf1147a3d80ccfe662438"} Jan 20 04:30:01 crc kubenswrapper[4898]: I0120 04:30:01.460574 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" event={"ID":"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef","Type":"ContainerStarted","Data":"8d778bcec4b5aa55a36dd733de24974456d2f76feebc2beb54c492d61c065947"} Jan 20 04:30:02 crc kubenswrapper[4898]: I0120 04:30:02.817024 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:02 crc kubenswrapper[4898]: I0120 04:30:02.884695 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-secret-volume\") pod \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " Jan 20 04:30:02 crc kubenswrapper[4898]: I0120 04:30:02.884875 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpmqm\" (UniqueName: \"kubernetes.io/projected/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-kube-api-access-fpmqm\") pod \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " Jan 20 04:30:02 crc kubenswrapper[4898]: I0120 04:30:02.884933 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-config-volume\") pod \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\" (UID: \"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef\") " Jan 20 04:30:02 crc kubenswrapper[4898]: I0120 04:30:02.885756 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-config-volume" (OuterVolumeSpecName: "config-volume") pod "8eb7a785-3b6e-40a9-ad1c-babff3bf1cef" (UID: "8eb7a785-3b6e-40a9-ad1c-babff3bf1cef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:30:02 crc kubenswrapper[4898]: I0120 04:30:02.890318 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8eb7a785-3b6e-40a9-ad1c-babff3bf1cef" (UID: "8eb7a785-3b6e-40a9-ad1c-babff3bf1cef"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:30:02 crc kubenswrapper[4898]: I0120 04:30:02.890875 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-kube-api-access-fpmqm" (OuterVolumeSpecName: "kube-api-access-fpmqm") pod "8eb7a785-3b6e-40a9-ad1c-babff3bf1cef" (UID: "8eb7a785-3b6e-40a9-ad1c-babff3bf1cef"). InnerVolumeSpecName "kube-api-access-fpmqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:30:02 crc kubenswrapper[4898]: I0120 04:30:02.986648 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpmqm\" (UniqueName: \"kubernetes.io/projected/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-kube-api-access-fpmqm\") on node \"crc\" DevicePath \"\"" Jan 20 04:30:02 crc kubenswrapper[4898]: I0120 04:30:02.986863 4898 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 04:30:02 crc kubenswrapper[4898]: I0120 04:30:02.986873 4898 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 04:30:03 crc kubenswrapper[4898]: I0120 04:30:03.485182 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" event={"ID":"8eb7a785-3b6e-40a9-ad1c-babff3bf1cef","Type":"ContainerDied","Data":"8d778bcec4b5aa55a36dd733de24974456d2f76feebc2beb54c492d61c065947"} Jan 20 04:30:03 crc kubenswrapper[4898]: I0120 04:30:03.485236 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d778bcec4b5aa55a36dd733de24974456d2f76feebc2beb54c492d61c065947" Jan 20 04:30:03 crc kubenswrapper[4898]: I0120 04:30:03.485284 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp" Jan 20 04:30:03 crc kubenswrapper[4898]: I0120 04:30:03.910339 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs"] Jan 20 04:30:03 crc kubenswrapper[4898]: I0120 04:30:03.917077 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481345-v9njs"] Jan 20 04:30:05 crc kubenswrapper[4898]: I0120 04:30:05.734537 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a73f73a2-1335-45a7-867b-18585f1c0862" path="/var/lib/kubelet/pods/a73f73a2-1335-45a7-867b-18585f1c0862/volumes" Jan 20 04:30:07 crc kubenswrapper[4898]: I0120 04:30:07.721544 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:30:07 crc kubenswrapper[4898]: E0120 04:30:07.722239 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:30:18 crc kubenswrapper[4898]: I0120 04:30:18.333078 4898 scope.go:117] "RemoveContainer" containerID="3a1a758f5bb760edb0adfaeba23f50e231bc7b6838d3b7f375abb81d9918bc88" Jan 20 04:30:19 crc kubenswrapper[4898]: I0120 04:30:19.744747 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:30:19 crc kubenswrapper[4898]: E0120 04:30:19.745472 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:30:33 crc kubenswrapper[4898]: I0120 04:30:33.721251 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:30:33 crc kubenswrapper[4898]: E0120 04:30:33.721726 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:30:47 crc kubenswrapper[4898]: I0120 04:30:47.721641 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:30:47 crc kubenswrapper[4898]: E0120 04:30:47.722594 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:31:00 crc kubenswrapper[4898]: I0120 04:31:00.721779 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:31:00 crc kubenswrapper[4898]: E0120 04:31:00.722906 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:31:12 crc kubenswrapper[4898]: I0120 04:31:12.721859 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:31:12 crc kubenswrapper[4898]: E0120 04:31:12.723110 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:31:26 crc kubenswrapper[4898]: I0120 04:31:26.721118 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:31:26 crc kubenswrapper[4898]: E0120 04:31:26.723342 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.356192 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sk2pl"] Jan 20 04:31:38 crc kubenswrapper[4898]: E0120 04:31:38.357969 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb7a785-3b6e-40a9-ad1c-babff3bf1cef" containerName="collect-profiles" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.358007 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb7a785-3b6e-40a9-ad1c-babff3bf1cef" containerName="collect-profiles" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.358475 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb7a785-3b6e-40a9-ad1c-babff3bf1cef" containerName="collect-profiles" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.361735 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.375741 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sk2pl"] Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.411581 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpr2x\" (UniqueName: \"kubernetes.io/projected/192b154a-aed3-4a87-b67f-646d86400202-kube-api-access-vpr2x\") pod \"certified-operators-sk2pl\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.412070 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-utilities\") pod \"certified-operators-sk2pl\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.412204 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-catalog-content\") pod \"certified-operators-sk2pl\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.514073 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpr2x\" (UniqueName: \"kubernetes.io/projected/192b154a-aed3-4a87-b67f-646d86400202-kube-api-access-vpr2x\") pod \"certified-operators-sk2pl\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.514137 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-utilities\") pod \"certified-operators-sk2pl\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.514186 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-catalog-content\") 
pod \"certified-operators-sk2pl\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.514867 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-catalog-content\") pod \"certified-operators-sk2pl\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.515009 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-utilities\") pod \"certified-operators-sk2pl\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.539282 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpr2x\" (UniqueName: \"kubernetes.io/projected/192b154a-aed3-4a87-b67f-646d86400202-kube-api-access-vpr2x\") pod \"certified-operators-sk2pl\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.691132 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:38 crc kubenswrapper[4898]: I0120 04:31:38.724378 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:31:38 crc kubenswrapper[4898]: E0120 04:31:38.724710 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:31:39 crc kubenswrapper[4898]: I0120 04:31:39.120946 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sk2pl"] Jan 20 04:31:39 crc kubenswrapper[4898]: I0120 04:31:39.391058 4898 generic.go:334] "Generic (PLEG): container finished" podID="192b154a-aed3-4a87-b67f-646d86400202" containerID="f18f1ed1e0213755c3714df16f23f8c6f51157d6e0ce20699d3bb956f681f59c" exitCode=0 Jan 20 04:31:39 crc kubenswrapper[4898]: I0120 04:31:39.391165 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk2pl" event={"ID":"192b154a-aed3-4a87-b67f-646d86400202","Type":"ContainerDied","Data":"f18f1ed1e0213755c3714df16f23f8c6f51157d6e0ce20699d3bb956f681f59c"} Jan 20 04:31:39 crc kubenswrapper[4898]: I0120 04:31:39.391336 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk2pl" event={"ID":"192b154a-aed3-4a87-b67f-646d86400202","Type":"ContainerStarted","Data":"edd69798c709f20c33ef3d19770e7dd94cc469becdea91b01eb6e1eb171835ba"} Jan 20 04:31:41 crc kubenswrapper[4898]: I0120 04:31:41.428883 4898 generic.go:334] "Generic (PLEG): container finished" podID="192b154a-aed3-4a87-b67f-646d86400202" containerID="2c79dc608a734cfec2a60c66c1f63067da61afb12f059b7951d56f425abd8545" exitCode=0 Jan 20 04:31:41 crc kubenswrapper[4898]: I0120 
04:31:41.429312 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk2pl" event={"ID":"192b154a-aed3-4a87-b67f-646d86400202","Type":"ContainerDied","Data":"2c79dc608a734cfec2a60c66c1f63067da61afb12f059b7951d56f425abd8545"} Jan 20 04:31:42 crc kubenswrapper[4898]: I0120 04:31:42.440425 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk2pl" event={"ID":"192b154a-aed3-4a87-b67f-646d86400202","Type":"ContainerStarted","Data":"c25ee2b0f0e59d7e37bdc6a5f998d9974d14131ad0add2731958599ca6d45076"} Jan 20 04:31:42 crc kubenswrapper[4898]: I0120 04:31:42.478321 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sk2pl" podStartSLOduration=1.919249335 podStartE2EDuration="4.478304236s" podCreationTimestamp="2026-01-20 04:31:38 +0000 UTC" firstStartedPulling="2026-01-20 04:31:39.393075659 +0000 UTC m=+2545.992863518" lastFinishedPulling="2026-01-20 04:31:41.95213057 +0000 UTC m=+2548.551918419" observedRunningTime="2026-01-20 04:31:42.471012709 +0000 UTC m=+2549.070800608" watchObservedRunningTime="2026-01-20 04:31:42.478304236 +0000 UTC m=+2549.078092095" Jan 20 04:31:48 crc kubenswrapper[4898]: I0120 04:31:48.691903 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:48 crc kubenswrapper[4898]: I0120 04:31:48.692511 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:48 crc kubenswrapper[4898]: I0120 04:31:48.757418 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:49 crc kubenswrapper[4898]: I0120 04:31:49.574163 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:49 crc kubenswrapper[4898]: I0120 04:31:49.641230 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sk2pl"] Jan 20 04:31:51 crc kubenswrapper[4898]: I0120 04:31:51.518184 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sk2pl" podUID="192b154a-aed3-4a87-b67f-646d86400202" containerName="registry-server" containerID="cri-o://c25ee2b0f0e59d7e37bdc6a5f998d9974d14131ad0add2731958599ca6d45076" gracePeriod=2 Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.527247 4898 generic.go:334] "Generic (PLEG): container finished" podID="192b154a-aed3-4a87-b67f-646d86400202" containerID="c25ee2b0f0e59d7e37bdc6a5f998d9974d14131ad0add2731958599ca6d45076" exitCode=0 Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.527324 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk2pl" event={"ID":"192b154a-aed3-4a87-b67f-646d86400202","Type":"ContainerDied","Data":"c25ee2b0f0e59d7e37bdc6a5f998d9974d14131ad0add2731958599ca6d45076"} Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.527539 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk2pl" event={"ID":"192b154a-aed3-4a87-b67f-646d86400202","Type":"ContainerDied","Data":"edd69798c709f20c33ef3d19770e7dd94cc469becdea91b01eb6e1eb171835ba"} Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.527560 4898 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="edd69798c709f20c33ef3d19770e7dd94cc469becdea91b01eb6e1eb171835ba" Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.536747 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.713578 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-catalog-content\") pod \"192b154a-aed3-4a87-b67f-646d86400202\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.713647 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-utilities\") pod \"192b154a-aed3-4a87-b67f-646d86400202\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.713774 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpr2x\" (UniqueName: \"kubernetes.io/projected/192b154a-aed3-4a87-b67f-646d86400202-kube-api-access-vpr2x\") pod \"192b154a-aed3-4a87-b67f-646d86400202\" (UID: \"192b154a-aed3-4a87-b67f-646d86400202\") " Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.715180 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-utilities" (OuterVolumeSpecName: "utilities") pod "192b154a-aed3-4a87-b67f-646d86400202" (UID: "192b154a-aed3-4a87-b67f-646d86400202"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.719934 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192b154a-aed3-4a87-b67f-646d86400202-kube-api-access-vpr2x" (OuterVolumeSpecName: "kube-api-access-vpr2x") pod "192b154a-aed3-4a87-b67f-646d86400202" (UID: "192b154a-aed3-4a87-b67f-646d86400202"). InnerVolumeSpecName "kube-api-access-vpr2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.721635 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:31:52 crc kubenswrapper[4898]: E0120 04:31:52.721893 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.759291 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "192b154a-aed3-4a87-b67f-646d86400202" (UID: "192b154a-aed3-4a87-b67f-646d86400202"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.817360 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.817413 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192b154a-aed3-4a87-b67f-646d86400202-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:31:52 crc kubenswrapper[4898]: I0120 04:31:52.817459 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpr2x\" (UniqueName: \"kubernetes.io/projected/192b154a-aed3-4a87-b67f-646d86400202-kube-api-access-vpr2x\") on node \"crc\" DevicePath \"\"" Jan 20 04:31:53 crc kubenswrapper[4898]: I0120 04:31:53.539284 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sk2pl" Jan 20 04:31:53 crc kubenswrapper[4898]: I0120 04:31:53.602757 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sk2pl"] Jan 20 04:31:53 crc kubenswrapper[4898]: I0120 04:31:53.612811 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sk2pl"] Jan 20 04:31:53 crc kubenswrapper[4898]: I0120 04:31:53.764924 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192b154a-aed3-4a87-b67f-646d86400202" path="/var/lib/kubelet/pods/192b154a-aed3-4a87-b67f-646d86400202/volumes" Jan 20 04:32:07 crc kubenswrapper[4898]: I0120 04:32:07.722122 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:32:07 crc kubenswrapper[4898]: E0120 04:32:07.723189 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:32:21 crc kubenswrapper[4898]: I0120 04:32:21.721869 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:32:21 crc kubenswrapper[4898]: E0120 04:32:21.723024 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:32:36 crc kubenswrapper[4898]: I0120 04:32:36.722345 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:32:36 crc kubenswrapper[4898]: E0120 04:32:36.723285 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:32:50 crc kubenswrapper[4898]: I0120 04:32:50.722639 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:32:50 crc kubenswrapper[4898]: E0120 04:32:50.723945 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:33:01 crc kubenswrapper[4898]: I0120 04:33:01.722997 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:33:01 crc kubenswrapper[4898]: E0120 04:33:01.724198 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:33:14 crc kubenswrapper[4898]: I0120 04:33:14.720763 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:33:14 crc kubenswrapper[4898]: E0120 04:33:14.721327 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.247020 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zwgzg"] Jan 20 04:33:20 crc kubenswrapper[4898]: E0120 04:33:20.248251 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192b154a-aed3-4a87-b67f-646d86400202" containerName="extract-content" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.248273 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="192b154a-aed3-4a87-b67f-646d86400202" containerName="extract-content" Jan 20 04:33:20 crc kubenswrapper[4898]: E0120 04:33:20.248319 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192b154a-aed3-4a87-b67f-646d86400202" containerName="registry-server" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.248330 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="192b154a-aed3-4a87-b67f-646d86400202" containerName="registry-server" Jan 20 04:33:20 crc kubenswrapper[4898]: E0120 04:33:20.248354 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192b154a-aed3-4a87-b67f-646d86400202" containerName="extract-utilities" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.248365 4898 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="192b154a-aed3-4a87-b67f-646d86400202" containerName="extract-utilities" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.248684 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="192b154a-aed3-4a87-b67f-646d86400202" containerName="registry-server" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.253496 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.270882 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwgzg"] Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.388233 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dnzp\" (UniqueName: \"kubernetes.io/projected/624afea3-a0de-4d79-a179-aac4c942e723-kube-api-access-5dnzp\") pod \"redhat-marketplace-zwgzg\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.388663 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-utilities\") pod \"redhat-marketplace-zwgzg\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.388717 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-catalog-content\") pod \"redhat-marketplace-zwgzg\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.490414 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-utilities\") pod \"redhat-marketplace-zwgzg\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.490521 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-catalog-content\") pod \"redhat-marketplace-zwgzg\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.490551 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dnzp\" (UniqueName: \"kubernetes.io/projected/624afea3-a0de-4d79-a179-aac4c942e723-kube-api-access-5dnzp\") pod \"redhat-marketplace-zwgzg\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.491355 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-utilities\") pod \"redhat-marketplace-zwgzg\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.491626 4898 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-catalog-content\") pod \"redhat-marketplace-zwgzg\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.513193 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dnzp\" (UniqueName: \"kubernetes.io/projected/624afea3-a0de-4d79-a179-aac4c942e723-kube-api-access-5dnzp\") pod \"redhat-marketplace-zwgzg\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:20 crc kubenswrapper[4898]: I0120 04:33:20.625659 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:21 crc kubenswrapper[4898]: I0120 04:33:21.074475 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwgzg"] Jan 20 04:33:21 crc kubenswrapper[4898]: I0120 04:33:21.438261 4898 generic.go:334] "Generic (PLEG): container finished" podID="624afea3-a0de-4d79-a179-aac4c942e723" containerID="d34675054d939a23568c68c64d5eb1adb5a01419812de9a1273755e63d8a3cfe" exitCode=0 Jan 20 04:33:21 crc kubenswrapper[4898]: I0120 04:33:21.438323 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwgzg" event={"ID":"624afea3-a0de-4d79-a179-aac4c942e723","Type":"ContainerDied","Data":"d34675054d939a23568c68c64d5eb1adb5a01419812de9a1273755e63d8a3cfe"} Jan 20 04:33:21 crc kubenswrapper[4898]: I0120 04:33:21.438543 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwgzg" event={"ID":"624afea3-a0de-4d79-a179-aac4c942e723","Type":"ContainerStarted","Data":"7b5eab07ffee8828ae14517d4d89bbfe55ef327329276fb1a14144b92923fd6d"} Jan 20 04:33:21 crc kubenswrapper[4898]: I0120 04:33:21.441270 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 04:33:22 crc kubenswrapper[4898]: I0120 04:33:22.450047 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwgzg" event={"ID":"624afea3-a0de-4d79-a179-aac4c942e723","Type":"ContainerStarted","Data":"5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971"} Jan 20 04:33:23 crc kubenswrapper[4898]: I0120 04:33:23.463339 4898 generic.go:334] "Generic (PLEG): container finished" podID="624afea3-a0de-4d79-a179-aac4c942e723" containerID="5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971" exitCode=0 Jan 20 04:33:23 crc kubenswrapper[4898]: I0120 04:33:23.463481 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwgzg" event={"ID":"624afea3-a0de-4d79-a179-aac4c942e723","Type":"ContainerDied","Data":"5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971"} Jan 20 04:33:24 crc kubenswrapper[4898]: I0120 04:33:24.476295 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwgzg" event={"ID":"624afea3-a0de-4d79-a179-aac4c942e723","Type":"ContainerStarted","Data":"8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35"} Jan 20 04:33:24 crc kubenswrapper[4898]: I0120 04:33:24.500402 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwgzg" 
podStartSLOduration=1.9755428560000001 podStartE2EDuration="4.50038012s" podCreationTimestamp="2026-01-20 04:33:20 +0000 UTC" firstStartedPulling="2026-01-20 04:33:21.441012599 +0000 UTC m=+2648.040800468" lastFinishedPulling="2026-01-20 04:33:23.965849833 +0000 UTC m=+2650.565637732" observedRunningTime="2026-01-20 04:33:24.492602667 +0000 UTC m=+2651.092390596" watchObservedRunningTime="2026-01-20 04:33:24.50038012 +0000 UTC m=+2651.100167989" Jan 20 04:33:28 crc kubenswrapper[4898]: I0120 04:33:28.721767 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:33:28 crc kubenswrapper[4898]: E0120 04:33:28.722878 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:33:30 crc kubenswrapper[4898]: I0120 04:33:30.626399 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:30 crc kubenswrapper[4898]: I0120 04:33:30.626759 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:30 crc kubenswrapper[4898]: I0120 04:33:30.695530 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:31 crc kubenswrapper[4898]: I0120 04:33:31.614604 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:31 crc kubenswrapper[4898]: I0120 04:33:31.692424 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwgzg"] Jan 20 04:33:33 crc kubenswrapper[4898]: I0120 04:33:33.573321 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zwgzg" podUID="624afea3-a0de-4d79-a179-aac4c942e723" containerName="registry-server" containerID="cri-o://8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35" gracePeriod=2 Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.094019 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.172782 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dnzp\" (UniqueName: \"kubernetes.io/projected/624afea3-a0de-4d79-a179-aac4c942e723-kube-api-access-5dnzp\") pod \"624afea3-a0de-4d79-a179-aac4c942e723\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.172845 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-catalog-content\") pod \"624afea3-a0de-4d79-a179-aac4c942e723\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.172869 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-utilities\") pod \"624afea3-a0de-4d79-a179-aac4c942e723\" (UID: \"624afea3-a0de-4d79-a179-aac4c942e723\") " Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.174092 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-utilities" (OuterVolumeSpecName: "utilities") pod "624afea3-a0de-4d79-a179-aac4c942e723" (UID: "624afea3-a0de-4d79-a179-aac4c942e723"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.181502 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/624afea3-a0de-4d79-a179-aac4c942e723-kube-api-access-5dnzp" (OuterVolumeSpecName: "kube-api-access-5dnzp") pod "624afea3-a0de-4d79-a179-aac4c942e723" (UID: "624afea3-a0de-4d79-a179-aac4c942e723"). InnerVolumeSpecName "kube-api-access-5dnzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.208076 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "624afea3-a0de-4d79-a179-aac4c942e723" (UID: "624afea3-a0de-4d79-a179-aac4c942e723"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.275811 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dnzp\" (UniqueName: \"kubernetes.io/projected/624afea3-a0de-4d79-a179-aac4c942e723-kube-api-access-5dnzp\") on node \"crc\" DevicePath \"\"" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.275847 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.275859 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624afea3-a0de-4d79-a179-aac4c942e723-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.584969 4898 generic.go:334] "Generic (PLEG): container finished" podID="624afea3-a0de-4d79-a179-aac4c942e723" containerID="8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35" exitCode=0 Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.585021 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwgzg" event={"ID":"624afea3-a0de-4d79-a179-aac4c942e723","Type":"ContainerDied","Data":"8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35"} Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.585070 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwgzg" event={"ID":"624afea3-a0de-4d79-a179-aac4c942e723","Type":"ContainerDied","Data":"7b5eab07ffee8828ae14517d4d89bbfe55ef327329276fb1a14144b92923fd6d"} Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.585089 4898 scope.go:117] "RemoveContainer" containerID="8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.585090 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwgzg" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.610058 4898 scope.go:117] "RemoveContainer" containerID="5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.638684 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwgzg"] Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.638741 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwgzg"] Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.640049 4898 scope.go:117] "RemoveContainer" containerID="d34675054d939a23568c68c64d5eb1adb5a01419812de9a1273755e63d8a3cfe" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.691270 4898 scope.go:117] "RemoveContainer" containerID="8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35" Jan 20 04:33:34 crc kubenswrapper[4898]: E0120 04:33:34.691818 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35\": container with ID starting with 8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35 not found: ID does not exist" containerID="8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.691863 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35"} err="failed to get container status \"8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35\": rpc error: code = NotFound desc = could not find container \"8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35\": container with ID starting with 8d1dabd5c34b6fd12d052d8cc4f1049088ff5a682adcb0f70752346ae208ea35 not found: ID does not exist" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.691901 4898 scope.go:117] "RemoveContainer" containerID="5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971" Jan 20 04:33:34 crc kubenswrapper[4898]: E0120 04:33:34.692330 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971\": container with ID starting with 5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971 not found: ID does not exist" containerID="5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.692351 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971"} err="failed to get container status \"5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971\": rpc error: code = NotFound desc = could not find container \"5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971\": container with ID starting with 5fb53ada41976b16859733483ebcf9d62d0e0ab8667c61e9abad2545e8229971 not found: ID does not exist" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.692364 4898 scope.go:117] "RemoveContainer" containerID="d34675054d939a23568c68c64d5eb1adb5a01419812de9a1273755e63d8a3cfe" Jan 20 04:33:34 crc kubenswrapper[4898]: E0120 04:33:34.692672 4898 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d34675054d939a23568c68c64d5eb1adb5a01419812de9a1273755e63d8a3cfe\": container with ID starting with d34675054d939a23568c68c64d5eb1adb5a01419812de9a1273755e63d8a3cfe not found: ID does not exist" containerID="d34675054d939a23568c68c64d5eb1adb5a01419812de9a1273755e63d8a3cfe" Jan 20 04:33:34 crc kubenswrapper[4898]: I0120 04:33:34.692710 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34675054d939a23568c68c64d5eb1adb5a01419812de9a1273755e63d8a3cfe"} err="failed to get container status \"d34675054d939a23568c68c64d5eb1adb5a01419812de9a1273755e63d8a3cfe\": rpc error: code = NotFound desc = could not find container \"d34675054d939a23568c68c64d5eb1adb5a01419812de9a1273755e63d8a3cfe\": container with ID starting with d34675054d939a23568c68c64d5eb1adb5a01419812de9a1273755e63d8a3cfe not found: ID does not exist" Jan 20 04:33:35 crc kubenswrapper[4898]: I0120 04:33:35.731299 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="624afea3-a0de-4d79-a179-aac4c942e723" path="/var/lib/kubelet/pods/624afea3-a0de-4d79-a179-aac4c942e723/volumes" Jan 20 04:33:43 crc kubenswrapper[4898]: I0120 04:33:43.732699 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:33:43 crc kubenswrapper[4898]: E0120 04:33:43.733656 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:33:58 crc kubenswrapper[4898]: I0120 04:33:58.721828 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:33:58 crc kubenswrapper[4898]: E0120 04:33:58.722642 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:34:13 crc kubenswrapper[4898]: I0120 04:34:13.731860 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:34:15 crc kubenswrapper[4898]: I0120 04:34:15.018737 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"ddb74fa3d8a3c963ec85a4ef3d02a2d2265ded871ff63637402f5b05e95003b3"} Jan 20 04:35:09 crc kubenswrapper[4898]: I0120 04:35:09.797297 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-678fc"] Jan 20 04:35:09 crc kubenswrapper[4898]: E0120 04:35:09.798340 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624afea3-a0de-4d79-a179-aac4c942e723" containerName="extract-utilities" Jan 20 04:35:09 crc kubenswrapper[4898]: I0120 04:35:09.798357 4898 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="624afea3-a0de-4d79-a179-aac4c942e723" containerName="extract-utilities" Jan 20 04:35:09 crc kubenswrapper[4898]: E0120 04:35:09.798376 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624afea3-a0de-4d79-a179-aac4c942e723" containerName="registry-server" Jan 20 04:35:09 crc kubenswrapper[4898]: I0120 04:35:09.798384 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="624afea3-a0de-4d79-a179-aac4c942e723" containerName="registry-server" Jan 20 04:35:09 crc kubenswrapper[4898]: E0120 04:35:09.798410 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624afea3-a0de-4d79-a179-aac4c942e723" containerName="extract-content" Jan 20 04:35:09 crc kubenswrapper[4898]: I0120 04:35:09.798418 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="624afea3-a0de-4d79-a179-aac4c942e723" containerName="extract-content" Jan 20 04:35:09 crc kubenswrapper[4898]: I0120 04:35:09.798665 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="624afea3-a0de-4d79-a179-aac4c942e723" containerName="registry-server" Jan 20 04:35:09 crc kubenswrapper[4898]: I0120 04:35:09.800274 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:09 crc kubenswrapper[4898]: I0120 04:35:09.822864 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-678fc"] Jan 20 04:35:09 crc kubenswrapper[4898]: I0120 04:35:09.935809 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-utilities\") pod \"redhat-operators-678fc\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:09 crc kubenswrapper[4898]: I0120 04:35:09.935870 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wm6c\" (UniqueName: \"kubernetes.io/projected/5616003b-281b-48c6-8ded-c4dfec275eab-kube-api-access-4wm6c\") pod \"redhat-operators-678fc\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:09 crc kubenswrapper[4898]: I0120 04:35:09.935961 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-catalog-content\") pod \"redhat-operators-678fc\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:10 crc kubenswrapper[4898]: I0120 04:35:10.038944 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-catalog-content\") pod \"redhat-operators-678fc\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:10 crc kubenswrapper[4898]: I0120 04:35:10.039138 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-utilities\") pod \"redhat-operators-678fc\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:10 crc kubenswrapper[4898]: I0120 04:35:10.039167 4898 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4wm6c\" (UniqueName: \"kubernetes.io/projected/5616003b-281b-48c6-8ded-c4dfec275eab-kube-api-access-4wm6c\") pod \"redhat-operators-678fc\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:10 crc kubenswrapper[4898]: I0120 04:35:10.039463 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-catalog-content\") pod \"redhat-operators-678fc\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:10 crc kubenswrapper[4898]: I0120 04:35:10.039675 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-utilities\") pod \"redhat-operators-678fc\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:10 crc kubenswrapper[4898]: I0120 04:35:10.060788 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wm6c\" (UniqueName: \"kubernetes.io/projected/5616003b-281b-48c6-8ded-c4dfec275eab-kube-api-access-4wm6c\") pod \"redhat-operators-678fc\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:10 crc kubenswrapper[4898]: I0120 04:35:10.140712 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:10 crc kubenswrapper[4898]: I0120 04:35:10.604161 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-678fc"] Jan 20 04:35:11 crc kubenswrapper[4898]: I0120 04:35:11.565281 4898 generic.go:334] "Generic (PLEG): container finished" podID="5616003b-281b-48c6-8ded-c4dfec275eab" containerID="99ea3198c2348c2d4a35864c985848358dd8ab5c951c224f10797ce345f724e7" exitCode=0 Jan 20 04:35:11 crc kubenswrapper[4898]: I0120 04:35:11.565321 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678fc" event={"ID":"5616003b-281b-48c6-8ded-c4dfec275eab","Type":"ContainerDied","Data":"99ea3198c2348c2d4a35864c985848358dd8ab5c951c224f10797ce345f724e7"} Jan 20 04:35:11 crc kubenswrapper[4898]: I0120 04:35:11.565616 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678fc" event={"ID":"5616003b-281b-48c6-8ded-c4dfec275eab","Type":"ContainerStarted","Data":"3591bf27ebe6b0666a7b0b48ebe77b1bf5d7f6b77a80019a168d59eadc0c3b8a"} Jan 20 04:35:12 crc kubenswrapper[4898]: I0120 04:35:12.575081 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678fc" event={"ID":"5616003b-281b-48c6-8ded-c4dfec275eab","Type":"ContainerStarted","Data":"f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d"} Jan 20 04:35:13 crc kubenswrapper[4898]: I0120 04:35:13.584075 4898 generic.go:334] "Generic (PLEG): container finished" podID="5616003b-281b-48c6-8ded-c4dfec275eab" containerID="f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d" exitCode=0 Jan 20 04:35:13 crc kubenswrapper[4898]: I0120 04:35:13.584169 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678fc" 
event={"ID":"5616003b-281b-48c6-8ded-c4dfec275eab","Type":"ContainerDied","Data":"f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d"} Jan 20 04:35:14 crc kubenswrapper[4898]: I0120 04:35:14.597466 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678fc" event={"ID":"5616003b-281b-48c6-8ded-c4dfec275eab","Type":"ContainerStarted","Data":"33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50"} Jan 20 04:35:14 crc kubenswrapper[4898]: I0120 04:35:14.621011 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-678fc" podStartSLOduration=2.975327049 podStartE2EDuration="5.620991769s" podCreationTimestamp="2026-01-20 04:35:09 +0000 UTC" firstStartedPulling="2026-01-20 04:35:11.567199993 +0000 UTC m=+2758.166987852" lastFinishedPulling="2026-01-20 04:35:14.212864713 +0000 UTC m=+2760.812652572" observedRunningTime="2026-01-20 04:35:14.61483835 +0000 UTC m=+2761.214626229" watchObservedRunningTime="2026-01-20 04:35:14.620991769 +0000 UTC m=+2761.220779648" Jan 20 04:35:20 crc kubenswrapper[4898]: I0120 04:35:20.141294 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:20 crc kubenswrapper[4898]: I0120 04:35:20.142587 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:20 crc kubenswrapper[4898]: I0120 04:35:20.187009 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:20 crc kubenswrapper[4898]: I0120 04:35:20.693842 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:20 crc kubenswrapper[4898]: I0120 04:35:20.752203 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-678fc"] Jan 20 04:35:22 crc kubenswrapper[4898]: I0120 04:35:22.658542 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-678fc" podUID="5616003b-281b-48c6-8ded-c4dfec275eab" containerName="registry-server" containerID="cri-o://33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50" gracePeriod=2 Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.184106 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.357634 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-catalog-content\") pod \"5616003b-281b-48c6-8ded-c4dfec275eab\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.357820 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-utilities\") pod \"5616003b-281b-48c6-8ded-c4dfec275eab\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.357904 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wm6c\" (UniqueName: \"kubernetes.io/projected/5616003b-281b-48c6-8ded-c4dfec275eab-kube-api-access-4wm6c\") pod \"5616003b-281b-48c6-8ded-c4dfec275eab\" (UID: \"5616003b-281b-48c6-8ded-c4dfec275eab\") " Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.360816 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-utilities" (OuterVolumeSpecName: "utilities") pod "5616003b-281b-48c6-8ded-c4dfec275eab" (UID: "5616003b-281b-48c6-8ded-c4dfec275eab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.370652 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5616003b-281b-48c6-8ded-c4dfec275eab-kube-api-access-4wm6c" (OuterVolumeSpecName: "kube-api-access-4wm6c") pod "5616003b-281b-48c6-8ded-c4dfec275eab" (UID: "5616003b-281b-48c6-8ded-c4dfec275eab"). InnerVolumeSpecName "kube-api-access-4wm6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.461422 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.461493 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wm6c\" (UniqueName: \"kubernetes.io/projected/5616003b-281b-48c6-8ded-c4dfec275eab-kube-api-access-4wm6c\") on node \"crc\" DevicePath \"\"" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.488856 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5616003b-281b-48c6-8ded-c4dfec275eab" (UID: "5616003b-281b-48c6-8ded-c4dfec275eab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.563782 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5616003b-281b-48c6-8ded-c4dfec275eab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.669516 4898 generic.go:334] "Generic (PLEG): container finished" podID="5616003b-281b-48c6-8ded-c4dfec275eab" containerID="33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50" exitCode=0 Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.669565 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678fc" event={"ID":"5616003b-281b-48c6-8ded-c4dfec275eab","Type":"ContainerDied","Data":"33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50"} Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.669599 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678fc" event={"ID":"5616003b-281b-48c6-8ded-c4dfec275eab","Type":"ContainerDied","Data":"3591bf27ebe6b0666a7b0b48ebe77b1bf5d7f6b77a80019a168d59eadc0c3b8a"} Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.669622 4898 scope.go:117] "RemoveContainer" containerID="33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.669749 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-678fc" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.702124 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-678fc"] Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.710313 4898 scope.go:117] "RemoveContainer" containerID="f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.711934 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-678fc"] Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.750184 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5616003b-281b-48c6-8ded-c4dfec275eab" path="/var/lib/kubelet/pods/5616003b-281b-48c6-8ded-c4dfec275eab/volumes" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.779520 4898 scope.go:117] "RemoveContainer" containerID="99ea3198c2348c2d4a35864c985848358dd8ab5c951c224f10797ce345f724e7" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.823612 4898 scope.go:117] "RemoveContainer" containerID="33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50" Jan 20 04:35:23 crc kubenswrapper[4898]: E0120 04:35:23.824122 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50\": container with ID starting with 33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50 not found: ID does not exist" containerID="33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.824160 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50"} err="failed to get container status \"33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50\": rpc error: code = NotFound desc 
= could not find container \"33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50\": container with ID starting with 33325ddda6bf5f59b2dbe1d4461931b42e67ecc006beda01f8a8bd70986cfc50 not found: ID does not exist" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.824183 4898 scope.go:117] "RemoveContainer" containerID="f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d" Jan 20 04:35:23 crc kubenswrapper[4898]: E0120 04:35:23.824450 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d\": container with ID starting with f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d not found: ID does not exist" containerID="f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.824476 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d"} err="failed to get container status \"f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d\": rpc error: code = NotFound desc = could not find container \"f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d\": container with ID starting with f61bd6cee8f9c2550aec15fc6e2217d9956263ea1e3c2b828d3519f8c47fc63d not found: ID does not exist" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.824490 4898 scope.go:117] "RemoveContainer" containerID="99ea3198c2348c2d4a35864c985848358dd8ab5c951c224f10797ce345f724e7" Jan 20 04:35:23 crc kubenswrapper[4898]: E0120 04:35:23.824712 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99ea3198c2348c2d4a35864c985848358dd8ab5c951c224f10797ce345f724e7\": container with ID starting with 99ea3198c2348c2d4a35864c985848358dd8ab5c951c224f10797ce345f724e7 not found: ID does not exist" containerID="99ea3198c2348c2d4a35864c985848358dd8ab5c951c224f10797ce345f724e7" Jan 20 04:35:23 crc kubenswrapper[4898]: I0120 04:35:23.824735 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99ea3198c2348c2d4a35864c985848358dd8ab5c951c224f10797ce345f724e7"} err="failed to get container status \"99ea3198c2348c2d4a35864c985848358dd8ab5c951c224f10797ce345f724e7\": rpc error: code = NotFound desc = could not find container \"99ea3198c2348c2d4a35864c985848358dd8ab5c951c224f10797ce345f724e7\": container with ID starting with 99ea3198c2348c2d4a35864c985848358dd8ab5c951c224f10797ce345f724e7 not found: ID does not exist" Jan 20 04:36:39 crc kubenswrapper[4898]: I0120 04:36:39.975746 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:36:39 crc kubenswrapper[4898]: I0120 04:36:39.976333 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:37:09 crc kubenswrapper[4898]: I0120 04:37:09.977189 4898 patch_prober.go:28] interesting 
pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:37:09 crc kubenswrapper[4898]: I0120 04:37:09.978417 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.288411 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v5q7f"] Jan 20 04:37:16 crc kubenswrapper[4898]: E0120 04:37:16.294073 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5616003b-281b-48c6-8ded-c4dfec275eab" containerName="extract-utilities" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.294173 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="5616003b-281b-48c6-8ded-c4dfec275eab" containerName="extract-utilities" Jan 20 04:37:16 crc kubenswrapper[4898]: E0120 04:37:16.294241 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5616003b-281b-48c6-8ded-c4dfec275eab" containerName="registry-server" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.294295 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="5616003b-281b-48c6-8ded-c4dfec275eab" containerName="registry-server" Jan 20 04:37:16 crc kubenswrapper[4898]: E0120 04:37:16.294361 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5616003b-281b-48c6-8ded-c4dfec275eab" containerName="extract-content" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.294413 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="5616003b-281b-48c6-8ded-c4dfec275eab" containerName="extract-content" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.296999 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="5616003b-281b-48c6-8ded-c4dfec275eab" containerName="registry-server" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.298496 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v5q7f" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.314345 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5q7f"] Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.402369 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-catalog-content\") pod \"community-operators-v5q7f\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") " pod="openshift-marketplace/community-operators-v5q7f" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.402416 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpb2n\" (UniqueName: \"kubernetes.io/projected/a486ef4b-92bf-4a6b-901f-876cfbb055ea-kube-api-access-xpb2n\") pod \"community-operators-v5q7f\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") " pod="openshift-marketplace/community-operators-v5q7f" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.402529 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-utilities\") pod \"community-operators-v5q7f\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") " pod="openshift-marketplace/community-operators-v5q7f" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.504771 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-catalog-content\") pod \"community-operators-v5q7f\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") " pod="openshift-marketplace/community-operators-v5q7f" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.504817 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpb2n\" (UniqueName: \"kubernetes.io/projected/a486ef4b-92bf-4a6b-901f-876cfbb055ea-kube-api-access-xpb2n\") pod \"community-operators-v5q7f\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") " pod="openshift-marketplace/community-operators-v5q7f" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.504874 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-utilities\") pod \"community-operators-v5q7f\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") " pod="openshift-marketplace/community-operators-v5q7f" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.505287 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-catalog-content\") pod \"community-operators-v5q7f\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") " pod="openshift-marketplace/community-operators-v5q7f" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.505299 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-utilities\") pod \"community-operators-v5q7f\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") " pod="openshift-marketplace/community-operators-v5q7f" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.530952 4898 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xpb2n\" (UniqueName: \"kubernetes.io/projected/a486ef4b-92bf-4a6b-901f-876cfbb055ea-kube-api-access-xpb2n\") pod \"community-operators-v5q7f\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") " pod="openshift-marketplace/community-operators-v5q7f" Jan 20 04:37:16 crc kubenswrapper[4898]: I0120 04:37:16.670974 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5q7f" Jan 20 04:37:17 crc kubenswrapper[4898]: I0120 04:37:17.207579 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5q7f"] Jan 20 04:37:17 crc kubenswrapper[4898]: W0120 04:37:17.211858 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda486ef4b_92bf_4a6b_901f_876cfbb055ea.slice/crio-a6bb6c1b84ee29f20ec7a264ef42c7b92c841f7f39977f4a44cbb160ff825082 WatchSource:0}: Error finding container a6bb6c1b84ee29f20ec7a264ef42c7b92c841f7f39977f4a44cbb160ff825082: Status 404 returned error can't find the container with id a6bb6c1b84ee29f20ec7a264ef42c7b92c841f7f39977f4a44cbb160ff825082 Jan 20 04:37:17 crc kubenswrapper[4898]: I0120 04:37:17.835029 4898 generic.go:334] "Generic (PLEG): container finished" podID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" containerID="8c68f50ba5f5e984667ed2ac17444546f72ff3724fe42decf10be1e26450b476" exitCode=0 Jan 20 04:37:17 crc kubenswrapper[4898]: I0120 04:37:17.835271 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5q7f" event={"ID":"a486ef4b-92bf-4a6b-901f-876cfbb055ea","Type":"ContainerDied","Data":"8c68f50ba5f5e984667ed2ac17444546f72ff3724fe42decf10be1e26450b476"} Jan 20 04:37:17 crc kubenswrapper[4898]: I0120 04:37:17.835493 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5q7f" event={"ID":"a486ef4b-92bf-4a6b-901f-876cfbb055ea","Type":"ContainerStarted","Data":"a6bb6c1b84ee29f20ec7a264ef42c7b92c841f7f39977f4a44cbb160ff825082"} Jan 20 04:37:18 crc kubenswrapper[4898]: I0120 04:37:18.843942 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5q7f" event={"ID":"a486ef4b-92bf-4a6b-901f-876cfbb055ea","Type":"ContainerStarted","Data":"7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7"} Jan 20 04:37:19 crc kubenswrapper[4898]: I0120 04:37:19.852508 4898 generic.go:334] "Generic (PLEG): container finished" podID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" containerID="7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7" exitCode=0 Jan 20 04:37:19 crc kubenswrapper[4898]: I0120 04:37:19.852616 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5q7f" event={"ID":"a486ef4b-92bf-4a6b-901f-876cfbb055ea","Type":"ContainerDied","Data":"7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7"} Jan 20 04:37:20 crc kubenswrapper[4898]: I0120 04:37:20.866846 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5q7f" event={"ID":"a486ef4b-92bf-4a6b-901f-876cfbb055ea","Type":"ContainerStarted","Data":"7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599"} Jan 20 04:37:20 crc kubenswrapper[4898]: I0120 04:37:20.898288 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v5q7f" 
Jan 20 04:37:26 crc kubenswrapper[4898]: I0120 04:37:26.671494 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v5q7f"
Jan 20 04:37:26 crc kubenswrapper[4898]: I0120 04:37:26.671792 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v5q7f"
Jan 20 04:37:26 crc kubenswrapper[4898]: I0120 04:37:26.710241 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v5q7f"
Jan 20 04:37:26 crc kubenswrapper[4898]: I0120 04:37:26.973529 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v5q7f"
Jan 20 04:37:27 crc kubenswrapper[4898]: I0120 04:37:27.027297 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5q7f"]
Jan 20 04:37:28 crc kubenswrapper[4898]: I0120 04:37:28.943051 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v5q7f" podUID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" containerName="registry-server" containerID="cri-o://7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599" gracePeriod=2
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.381921 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5q7f"
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.391980 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-catalog-content\") pod \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") "
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.392332 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpb2n\" (UniqueName: \"kubernetes.io/projected/a486ef4b-92bf-4a6b-901f-876cfbb055ea-kube-api-access-xpb2n\") pod \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") "
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.392697 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-utilities\") pod \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\" (UID: \"a486ef4b-92bf-4a6b-901f-876cfbb055ea\") "
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.394269 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-utilities" (OuterVolumeSpecName: "utilities") pod "a486ef4b-92bf-4a6b-901f-876cfbb055ea" (UID: "a486ef4b-92bf-4a6b-901f-876cfbb055ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.399302 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a486ef4b-92bf-4a6b-901f-876cfbb055ea-kube-api-access-xpb2n" (OuterVolumeSpecName: "kube-api-access-xpb2n") pod "a486ef4b-92bf-4a6b-901f-876cfbb055ea" (UID: "a486ef4b-92bf-4a6b-901f-876cfbb055ea"). InnerVolumeSpecName "kube-api-access-xpb2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.463138 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a486ef4b-92bf-4a6b-901f-876cfbb055ea" (UID: "a486ef4b-92bf-4a6b-901f-876cfbb055ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.496160 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.496193 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpb2n\" (UniqueName: \"kubernetes.io/projected/a486ef4b-92bf-4a6b-901f-876cfbb055ea-kube-api-access-xpb2n\") on node \"crc\" DevicePath \"\""
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.496209 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a486ef4b-92bf-4a6b-901f-876cfbb055ea-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.956766 4898 generic.go:334] "Generic (PLEG): container finished" podID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" containerID="7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599" exitCode=0
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.956924 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5q7f"
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.958882 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5q7f" event={"ID":"a486ef4b-92bf-4a6b-901f-876cfbb055ea","Type":"ContainerDied","Data":"7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599"}
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.959160 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5q7f" event={"ID":"a486ef4b-92bf-4a6b-901f-876cfbb055ea","Type":"ContainerDied","Data":"a6bb6c1b84ee29f20ec7a264ef42c7b92c841f7f39977f4a44cbb160ff825082"}
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.959307 4898 scope.go:117] "RemoveContainer" containerID="7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599"
Jan 20 04:37:29 crc kubenswrapper[4898]: I0120 04:37:29.996542 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5q7f"]
Jan 20 04:37:30 crc kubenswrapper[4898]: I0120 04:37:30.002292 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v5q7f"]
Jan 20 04:37:30 crc kubenswrapper[4898]: I0120 04:37:30.003260 4898 scope.go:117] "RemoveContainer" containerID="7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7"
Jan 20 04:37:30 crc kubenswrapper[4898]: I0120 04:37:30.035554 4898 scope.go:117] "RemoveContainer" containerID="8c68f50ba5f5e984667ed2ac17444546f72ff3724fe42decf10be1e26450b476"
Jan 20 04:37:30 crc kubenswrapper[4898]: I0120 04:37:30.081805 4898 scope.go:117] "RemoveContainer" containerID="7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599"
Jan 20 04:37:30 crc kubenswrapper[4898]: E0120 04:37:30.082697 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599\": container with ID starting with 7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599 not found: ID does not exist" containerID="7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599"
Jan 20 04:37:30 crc kubenswrapper[4898]: I0120 04:37:30.082796 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599"} err="failed to get container status \"7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599\": rpc error: code = NotFound desc = could not find container \"7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599\": container with ID starting with 7ab3e1cd5d338420a38a62884656f941c94948c4988645a74ec5eba80db50599 not found: ID does not exist"
Jan 20 04:37:30 crc kubenswrapper[4898]: I0120 04:37:30.082853 4898 scope.go:117] "RemoveContainer" containerID="7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7"
Jan 20 04:37:30 crc kubenswrapper[4898]: E0120 04:37:30.083792 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7\": container with ID starting with 7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7 not found: ID does not exist" containerID="7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7"
Jan 20 04:37:30 crc kubenswrapper[4898]: I0120 04:37:30.083835 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7"} err="failed to get container status \"7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7\": rpc error: code = NotFound desc = could not find container \"7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7\": container with ID starting with 7a26478b19c0353c206029e3059f3f26c86ea104671feaca81f0cdb01a42a5d7 not found: ID does not exist"
Jan 20 04:37:30 crc kubenswrapper[4898]: I0120 04:37:30.083864 4898 scope.go:117] "RemoveContainer" containerID="8c68f50ba5f5e984667ed2ac17444546f72ff3724fe42decf10be1e26450b476"
Jan 20 04:37:30 crc kubenswrapper[4898]: E0120 04:37:30.084579 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c68f50ba5f5e984667ed2ac17444546f72ff3724fe42decf10be1e26450b476\": container with ID starting with 8c68f50ba5f5e984667ed2ac17444546f72ff3724fe42decf10be1e26450b476 not found: ID does not exist" containerID="8c68f50ba5f5e984667ed2ac17444546f72ff3724fe42decf10be1e26450b476"
Jan 20 04:37:30 crc kubenswrapper[4898]: I0120 04:37:30.084661 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c68f50ba5f5e984667ed2ac17444546f72ff3724fe42decf10be1e26450b476"} err="failed to get container status \"8c68f50ba5f5e984667ed2ac17444546f72ff3724fe42decf10be1e26450b476\": rpc error: code = NotFound desc = could not find container \"8c68f50ba5f5e984667ed2ac17444546f72ff3724fe42decf10be1e26450b476\": container with ID starting with 8c68f50ba5f5e984667ed2ac17444546f72ff3724fe42decf10be1e26450b476 not found: ID does not exist"
Jan 20 04:37:31 crc kubenswrapper[4898]: I0120 04:37:31.732070 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" path="/var/lib/kubelet/pods/a486ef4b-92bf-4a6b-901f-876cfbb055ea/volumes"
Jan 20 04:37:39 crc kubenswrapper[4898]: I0120 04:37:39.976164 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 04:37:39 crc kubenswrapper[4898]: I0120 04:37:39.977306 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 04:37:39 crc kubenswrapper[4898]: I0120 04:37:39.977397 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6"
Jan 20 04:37:39 crc kubenswrapper[4898]: I0120 04:37:39.979074 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ddb74fa3d8a3c963ec85a4ef3d02a2d2265ded871ff63637402f5b05e95003b3"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 04:37:39 crc kubenswrapper[4898]: I0120 04:37:39.979212 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://ddb74fa3d8a3c963ec85a4ef3d02a2d2265ded871ff63637402f5b05e95003b3" gracePeriod=600
pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://ddb74fa3d8a3c963ec85a4ef3d02a2d2265ded871ff63637402f5b05e95003b3" gracePeriod=600 Jan 20 04:37:41 crc kubenswrapper[4898]: I0120 04:37:41.079542 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="ddb74fa3d8a3c963ec85a4ef3d02a2d2265ded871ff63637402f5b05e95003b3" exitCode=0 Jan 20 04:37:41 crc kubenswrapper[4898]: I0120 04:37:41.079628 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"ddb74fa3d8a3c963ec85a4ef3d02a2d2265ded871ff63637402f5b05e95003b3"} Jan 20 04:37:41 crc kubenswrapper[4898]: I0120 04:37:41.080252 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff"} Jan 20 04:37:41 crc kubenswrapper[4898]: I0120 04:37:41.080274 4898 scope.go:117] "RemoveContainer" containerID="e263f7f9029761dede67b9f3342cc48f514fc4483547379bd6e4659bfa68ac84" Jan 20 04:38:18 crc kubenswrapper[4898]: I0120 04:38:18.592108 4898 scope.go:117] "RemoveContainer" containerID="c25ee2b0f0e59d7e37bdc6a5f998d9974d14131ad0add2731958599ca6d45076" Jan 20 04:38:18 crc kubenswrapper[4898]: I0120 04:38:18.627845 4898 scope.go:117] "RemoveContainer" containerID="2c79dc608a734cfec2a60c66c1f63067da61afb12f059b7951d56f425abd8545" Jan 20 04:38:18 crc kubenswrapper[4898]: I0120 04:38:18.665962 4898 scope.go:117] "RemoveContainer" containerID="f18f1ed1e0213755c3714df16f23f8c6f51157d6e0ce20699d3bb956f681f59c" Jan 20 04:40:09 crc kubenswrapper[4898]: I0120 04:40:09.976266 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:40:09 crc kubenswrapper[4898]: I0120 04:40:09.976987 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:40:39 crc kubenswrapper[4898]: I0120 04:40:39.975961 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:40:39 crc kubenswrapper[4898]: I0120 04:40:39.978187 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:41:09 crc kubenswrapper[4898]: I0120 04:41:09.976244 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:41:09 crc kubenswrapper[4898]: I0120 04:41:09.977055 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:41:09 crc kubenswrapper[4898]: I0120 04:41:09.977147 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 04:41:09 crc kubenswrapper[4898]: I0120 04:41:09.978530 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 04:41:09 crc kubenswrapper[4898]: I0120 04:41:09.978653 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" gracePeriod=600 Jan 20 04:41:10 crc kubenswrapper[4898]: E0120 04:41:10.108762 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:41:10 crc kubenswrapper[4898]: I0120 04:41:10.142558 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" exitCode=0 Jan 20 04:41:10 crc kubenswrapper[4898]: I0120 04:41:10.142614 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff"} Jan 20 04:41:10 crc kubenswrapper[4898]: I0120 04:41:10.142676 4898 scope.go:117] "RemoveContainer" containerID="ddb74fa3d8a3c963ec85a4ef3d02a2d2265ded871ff63637402f5b05e95003b3" Jan 20 04:41:10 crc kubenswrapper[4898]: I0120 04:41:10.143237 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:41:10 crc kubenswrapper[4898]: E0120 04:41:10.143548 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" 
podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:41:20 crc kubenswrapper[4898]: I0120 04:41:20.721696 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:41:20 crc kubenswrapper[4898]: E0120 04:41:20.722974 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:41:35 crc kubenswrapper[4898]: I0120 04:41:35.721582 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:41:35 crc kubenswrapper[4898]: E0120 04:41:35.722079 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:41:49 crc kubenswrapper[4898]: I0120 04:41:49.721960 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:41:49 crc kubenswrapper[4898]: E0120 04:41:49.723248 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:42:04 crc kubenswrapper[4898]: I0120 04:42:04.721810 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:42:04 crc kubenswrapper[4898]: E0120 04:42:04.723017 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:42:17 crc kubenswrapper[4898]: I0120 04:42:17.721054 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:42:17 crc kubenswrapper[4898]: E0120 04:42:17.721771 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:42:28 crc kubenswrapper[4898]: I0120 04:42:28.721742 4898 scope.go:117] "RemoveContainer" 
containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:42:28 crc kubenswrapper[4898]: E0120 04:42:28.722662 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:42:40 crc kubenswrapper[4898]: I0120 04:42:40.722211 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:42:40 crc kubenswrapper[4898]: E0120 04:42:40.723071 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:42:51 crc kubenswrapper[4898]: I0120 04:42:51.721856 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:42:51 crc kubenswrapper[4898]: E0120 04:42:51.723287 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:42:58 crc kubenswrapper[4898]: I0120 04:42:58.828762 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vnd8p"] Jan 20 04:42:58 crc kubenswrapper[4898]: E0120 04:42:58.830250 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" containerName="extract-content" Jan 20 04:42:58 crc kubenswrapper[4898]: I0120 04:42:58.830268 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" containerName="extract-content" Jan 20 04:42:58 crc kubenswrapper[4898]: E0120 04:42:58.830286 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" containerName="extract-utilities" Jan 20 04:42:58 crc kubenswrapper[4898]: I0120 04:42:58.830292 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" containerName="extract-utilities" Jan 20 04:42:58 crc kubenswrapper[4898]: E0120 04:42:58.830318 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" containerName="registry-server" Jan 20 04:42:58 crc kubenswrapper[4898]: I0120 04:42:58.830323 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" containerName="registry-server" Jan 20 04:42:58 crc kubenswrapper[4898]: I0120 04:42:58.830610 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="a486ef4b-92bf-4a6b-901f-876cfbb055ea" containerName="registry-server" Jan 20 04:42:58 crc kubenswrapper[4898]: I0120 04:42:58.832543 4898 
Jan 20 04:42:58 crc kubenswrapper[4898]: I0120 04:42:58.853807 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vnd8p"]
Jan 20 04:42:59 crc kubenswrapper[4898]: I0120 04:42:59.004148 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5bxs\" (UniqueName: \"kubernetes.io/projected/8cf3eddf-48aa-44ec-b26e-f351695859df-kube-api-access-p5bxs\") pod \"certified-operators-vnd8p\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") " pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:42:59 crc kubenswrapper[4898]: I0120 04:42:59.004739 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-utilities\") pod \"certified-operators-vnd8p\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") " pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:42:59 crc kubenswrapper[4898]: I0120 04:42:59.004849 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-catalog-content\") pod \"certified-operators-vnd8p\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") " pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:42:59 crc kubenswrapper[4898]: I0120 04:42:59.107212 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-utilities\") pod \"certified-operators-vnd8p\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") " pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:42:59 crc kubenswrapper[4898]: I0120 04:42:59.107290 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-catalog-content\") pod \"certified-operators-vnd8p\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") " pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:42:59 crc kubenswrapper[4898]: I0120 04:42:59.107473 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5bxs\" (UniqueName: \"kubernetes.io/projected/8cf3eddf-48aa-44ec-b26e-f351695859df-kube-api-access-p5bxs\") pod \"certified-operators-vnd8p\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") " pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:42:59 crc kubenswrapper[4898]: I0120 04:42:59.107802 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-utilities\") pod \"certified-operators-vnd8p\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") " pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:42:59 crc kubenswrapper[4898]: I0120 04:42:59.107900 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-catalog-content\") pod \"certified-operators-vnd8p\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") " pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:42:59 crc kubenswrapper[4898]: I0120 04:42:59.126304 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5bxs\" (UniqueName: \"kubernetes.io/projected/8cf3eddf-48aa-44ec-b26e-f351695859df-kube-api-access-p5bxs\") pod \"certified-operators-vnd8p\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") " pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:42:59 crc kubenswrapper[4898]: I0120 04:42:59.173502 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:42:59 crc kubenswrapper[4898]: I0120 04:42:59.675514 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vnd8p"]
Jan 20 04:43:00 crc kubenswrapper[4898]: I0120 04:43:00.017507 4898 generic.go:334] "Generic (PLEG): container finished" podID="8cf3eddf-48aa-44ec-b26e-f351695859df" containerID="b9be4dc59c372c4d652dc2478b9881fe876a8cf02664b9031ed922591952a2a1" exitCode=0
Jan 20 04:43:00 crc kubenswrapper[4898]: I0120 04:43:00.017644 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnd8p" event={"ID":"8cf3eddf-48aa-44ec-b26e-f351695859df","Type":"ContainerDied","Data":"b9be4dc59c372c4d652dc2478b9881fe876a8cf02664b9031ed922591952a2a1"}
Jan 20 04:43:00 crc kubenswrapper[4898]: I0120 04:43:00.017929 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnd8p" event={"ID":"8cf3eddf-48aa-44ec-b26e-f351695859df","Type":"ContainerStarted","Data":"1e3a7951694257a36a8493ccbccd51bc81b46063bd24d086c7bf660ead271e36"}
Jan 20 04:43:00 crc kubenswrapper[4898]: I0120 04:43:00.020900 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 04:43:02 crc kubenswrapper[4898]: I0120 04:43:02.043491 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnd8p" event={"ID":"8cf3eddf-48aa-44ec-b26e-f351695859df","Type":"ContainerStarted","Data":"d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7"}
Jan 20 04:43:03 crc kubenswrapper[4898]: I0120 04:43:03.057812 4898 generic.go:334] "Generic (PLEG): container finished" podID="8cf3eddf-48aa-44ec-b26e-f351695859df" containerID="d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7" exitCode=0
Jan 20 04:43:03 crc kubenswrapper[4898]: I0120 04:43:03.057880 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnd8p" event={"ID":"8cf3eddf-48aa-44ec-b26e-f351695859df","Type":"ContainerDied","Data":"d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7"}
Jan 20 04:43:03 crc kubenswrapper[4898]: I0120 04:43:03.733612 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff"
Jan 20 04:43:03 crc kubenswrapper[4898]: E0120 04:43:03.734529 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:43:04 crc kubenswrapper[4898]: I0120 04:43:04.072345 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnd8p" event={"ID":"8cf3eddf-48aa-44ec-b26e-f351695859df","Type":"ContainerStarted","Data":"ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1"}
Jan 20 04:43:09 crc kubenswrapper[4898]: I0120 04:43:09.174389 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:43:09 crc kubenswrapper[4898]: I0120 04:43:09.175088 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:43:09 crc kubenswrapper[4898]: I0120 04:43:09.240516 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:43:09 crc kubenswrapper[4898]: I0120 04:43:09.265385 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vnd8p" podStartSLOduration=7.779767876 podStartE2EDuration="11.265337755s" podCreationTimestamp="2026-01-20 04:42:58 +0000 UTC" firstStartedPulling="2026-01-20 04:43:00.020323253 +0000 UTC m=+3226.620111152" lastFinishedPulling="2026-01-20 04:43:03.505893162 +0000 UTC m=+3230.105681031" observedRunningTime="2026-01-20 04:43:04.100976152 +0000 UTC m=+3230.700764051" watchObservedRunningTime="2026-01-20 04:43:09.265337755 +0000 UTC m=+3235.865125624"
Jan 20 04:43:10 crc kubenswrapper[4898]: I0120 04:43:10.189937 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:43:10 crc kubenswrapper[4898]: I0120 04:43:10.266931 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vnd8p"]
Jan 20 04:43:12 crc kubenswrapper[4898]: I0120 04:43:12.147017 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vnd8p" podUID="8cf3eddf-48aa-44ec-b26e-f351695859df" containerName="registry-server" containerID="cri-o://ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1" gracePeriod=2
Jan 20 04:43:12 crc kubenswrapper[4898]: I0120 04:43:12.650237 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:43:12 crc kubenswrapper[4898]: I0120 04:43:12.696539 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-catalog-content\") pod \"8cf3eddf-48aa-44ec-b26e-f351695859df\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") "
Jan 20 04:43:12 crc kubenswrapper[4898]: I0120 04:43:12.696613 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-utilities\") pod \"8cf3eddf-48aa-44ec-b26e-f351695859df\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") "
Jan 20 04:43:12 crc kubenswrapper[4898]: I0120 04:43:12.696788 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5bxs\" (UniqueName: \"kubernetes.io/projected/8cf3eddf-48aa-44ec-b26e-f351695859df-kube-api-access-p5bxs\") pod \"8cf3eddf-48aa-44ec-b26e-f351695859df\" (UID: \"8cf3eddf-48aa-44ec-b26e-f351695859df\") "
Jan 20 04:43:12 crc kubenswrapper[4898]: I0120 04:43:12.697725 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-utilities" (OuterVolumeSpecName: "utilities") pod "8cf3eddf-48aa-44ec-b26e-f351695859df" (UID: "8cf3eddf-48aa-44ec-b26e-f351695859df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 04:43:12 crc kubenswrapper[4898]: I0120 04:43:12.703376 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf3eddf-48aa-44ec-b26e-f351695859df-kube-api-access-p5bxs" (OuterVolumeSpecName: "kube-api-access-p5bxs") pod "8cf3eddf-48aa-44ec-b26e-f351695859df" (UID: "8cf3eddf-48aa-44ec-b26e-f351695859df"). InnerVolumeSpecName "kube-api-access-p5bxs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 04:43:12 crc kubenswrapper[4898]: I0120 04:43:12.746993 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cf3eddf-48aa-44ec-b26e-f351695859df" (UID: "8cf3eddf-48aa-44ec-b26e-f351695859df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 04:43:12 crc kubenswrapper[4898]: I0120 04:43:12.799347 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5bxs\" (UniqueName: \"kubernetes.io/projected/8cf3eddf-48aa-44ec-b26e-f351695859df-kube-api-access-p5bxs\") on node \"crc\" DevicePath \"\""
Jan 20 04:43:12 crc kubenswrapper[4898]: I0120 04:43:12.799474 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 04:43:12 crc kubenswrapper[4898]: I0120 04:43:12.799561 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cf3eddf-48aa-44ec-b26e-f351695859df-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.160093 4898 generic.go:334] "Generic (PLEG): container finished" podID="8cf3eddf-48aa-44ec-b26e-f351695859df" containerID="ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1" exitCode=0
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.160210 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vnd8p"
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.160212 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnd8p" event={"ID":"8cf3eddf-48aa-44ec-b26e-f351695859df","Type":"ContainerDied","Data":"ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1"}
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.160820 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnd8p" event={"ID":"8cf3eddf-48aa-44ec-b26e-f351695859df","Type":"ContainerDied","Data":"1e3a7951694257a36a8493ccbccd51bc81b46063bd24d086c7bf660ead271e36"}
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.160856 4898 scope.go:117] "RemoveContainer" containerID="ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1"
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.189490 4898 scope.go:117] "RemoveContainer" containerID="d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7"
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.212651 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vnd8p"]
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.219068 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vnd8p"]
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.227978 4898 scope.go:117] "RemoveContainer" containerID="b9be4dc59c372c4d652dc2478b9881fe876a8cf02664b9031ed922591952a2a1"
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.279343 4898 scope.go:117] "RemoveContainer" containerID="ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1"
Jan 20 04:43:13 crc kubenswrapper[4898]: E0120 04:43:13.281471 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1\": container with ID starting with ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1 not found: ID does not exist" containerID="ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1"
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.281526 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1"} err="failed to get container status \"ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1\": rpc error: code = NotFound desc = could not find container \"ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1\": container with ID starting with ecd24aa18f938e862a5ae59295de80aae2a785b62f56e73a5500175da7522ba1 not found: ID does not exist"
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.281565 4898 scope.go:117] "RemoveContainer" containerID="d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7"
Jan 20 04:43:13 crc kubenswrapper[4898]: E0120 04:43:13.282153 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7\": container with ID starting with d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7 not found: ID does not exist" containerID="d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7"
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.282214 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7"} err="failed to get container status \"d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7\": rpc error: code = NotFound desc = could not find container \"d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7\": container with ID starting with d68cd4a226f2f8dae61cd020ebd33fe2c7d3b1225c4d7941d611fde60a5a17f7 not found: ID does not exist"
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.282248 4898 scope.go:117] "RemoveContainer" containerID="b9be4dc59c372c4d652dc2478b9881fe876a8cf02664b9031ed922591952a2a1"
Jan 20 04:43:13 crc kubenswrapper[4898]: E0120 04:43:13.282741 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9be4dc59c372c4d652dc2478b9881fe876a8cf02664b9031ed922591952a2a1\": container with ID starting with b9be4dc59c372c4d652dc2478b9881fe876a8cf02664b9031ed922591952a2a1 not found: ID does not exist" containerID="b9be4dc59c372c4d652dc2478b9881fe876a8cf02664b9031ed922591952a2a1"
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.282778 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9be4dc59c372c4d652dc2478b9881fe876a8cf02664b9031ed922591952a2a1"} err="failed to get container status \"b9be4dc59c372c4d652dc2478b9881fe876a8cf02664b9031ed922591952a2a1\": rpc error: code = NotFound desc = could not find container \"b9be4dc59c372c4d652dc2478b9881fe876a8cf02664b9031ed922591952a2a1\": container with ID starting with b9be4dc59c372c4d652dc2478b9881fe876a8cf02664b9031ed922591952a2a1 not found: ID does not exist"
Jan 20 04:43:13 crc kubenswrapper[4898]: I0120 04:43:13.763069 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf3eddf-48aa-44ec-b26e-f351695859df" path="/var/lib/kubelet/pods/8cf3eddf-48aa-44ec-b26e-f351695859df/volumes"
Jan 20 04:43:17 crc kubenswrapper[4898]: I0120 04:43:17.722284 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff"
Jan 20 04:43:17 crc kubenswrapper[4898]: E0120 04:43:17.723516 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:43:32 crc kubenswrapper[4898]: I0120 04:43:32.721837 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff"
Jan 20 04:43:32 crc kubenswrapper[4898]: E0120 04:43:32.722728 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:43:44 crc kubenswrapper[4898]: I0120 04:43:44.722105 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff"
Jan 20 04:43:44 crc kubenswrapper[4898]: E0120 04:43:44.723742 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:43:59 crc kubenswrapper[4898]: I0120 04:43:59.721276 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff"
Jan 20 04:43:59 crc kubenswrapper[4898]: E0120 04:43:59.723586 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.334520 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gzs45"]
Jan 20 04:44:09 crc kubenswrapper[4898]: E0120 04:44:09.335287 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf3eddf-48aa-44ec-b26e-f351695859df" containerName="registry-server"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.335298 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf3eddf-48aa-44ec-b26e-f351695859df" containerName="registry-server"
Jan 20 04:44:09 crc kubenswrapper[4898]: E0120 04:44:09.335314 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf3eddf-48aa-44ec-b26e-f351695859df" containerName="extract-content"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.335320 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf3eddf-48aa-44ec-b26e-f351695859df" containerName="extract-content"
Jan 20 04:44:09 crc kubenswrapper[4898]: E0120 04:44:09.335344 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf3eddf-48aa-44ec-b26e-f351695859df" containerName="extract-utilities"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.335351 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf3eddf-48aa-44ec-b26e-f351695859df" containerName="extract-utilities"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.335570 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf3eddf-48aa-44ec-b26e-f351695859df" containerName="registry-server"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.336806 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.390877 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzs45"]
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.456476 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84ngj\" (UniqueName: \"kubernetes.io/projected/c9886ee0-b599-4b06-b919-c34e2b31dcb4-kube-api-access-84ngj\") pod \"redhat-marketplace-gzs45\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.456633 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-catalog-content\") pod \"redhat-marketplace-gzs45\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.456657 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-utilities\") pod \"redhat-marketplace-gzs45\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.558745 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84ngj\" (UniqueName: \"kubernetes.io/projected/c9886ee0-b599-4b06-b919-c34e2b31dcb4-kube-api-access-84ngj\") pod \"redhat-marketplace-gzs45\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.558885 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-catalog-content\") pod \"redhat-marketplace-gzs45\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.558912 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-utilities\") pod \"redhat-marketplace-gzs45\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.559394 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-catalog-content\") pod \"redhat-marketplace-gzs45\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.559465 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-utilities\") pod \"redhat-marketplace-gzs45\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.582933 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84ngj\" (UniqueName: \"kubernetes.io/projected/c9886ee0-b599-4b06-b919-c34e2b31dcb4-kube-api-access-84ngj\") pod \"redhat-marketplace-gzs45\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:09 crc kubenswrapper[4898]: I0120 04:44:09.674851 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:10 crc kubenswrapper[4898]: I0120 04:44:10.192416 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzs45"]
Jan 20 04:44:10 crc kubenswrapper[4898]: I0120 04:44:10.748316 4898 generic.go:334] "Generic (PLEG): container finished" podID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" containerID="90196fa36e38b44600942adac3a2a59f245eedf5002c8b3579eda6424c5331b4" exitCode=0
Jan 20 04:44:10 crc kubenswrapper[4898]: I0120 04:44:10.748362 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzs45" event={"ID":"c9886ee0-b599-4b06-b919-c34e2b31dcb4","Type":"ContainerDied","Data":"90196fa36e38b44600942adac3a2a59f245eedf5002c8b3579eda6424c5331b4"}
Jan 20 04:44:10 crc kubenswrapper[4898]: I0120 04:44:10.748399 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzs45" event={"ID":"c9886ee0-b599-4b06-b919-c34e2b31dcb4","Type":"ContainerStarted","Data":"8ff91dc72858505e6487b6773de4983af46f08af5ef869ed7909237a39ad457c"}
Jan 20 04:44:12 crc kubenswrapper[4898]: I0120 04:44:12.774165 4898 generic.go:334] "Generic (PLEG): container finished" podID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" containerID="71ab2a267c5a6610423d4f13dab5aea99c16b44bd740395a75fdc596450fc94f" exitCode=0
Jan 20 04:44:12 crc kubenswrapper[4898]: I0120 04:44:12.774274 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzs45" event={"ID":"c9886ee0-b599-4b06-b919-c34e2b31dcb4","Type":"ContainerDied","Data":"71ab2a267c5a6610423d4f13dab5aea99c16b44bd740395a75fdc596450fc94f"}
Jan 20 04:44:13 crc kubenswrapper[4898]: I0120 04:44:13.733410 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff"
Jan 20 04:44:13 crc kubenswrapper[4898]: E0120 04:44:13.734209 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21"
Jan 20 04:44:13 crc kubenswrapper[4898]: I0120 04:44:13.793123 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzs45" event={"ID":"c9886ee0-b599-4b06-b919-c34e2b31dcb4","Type":"ContainerStarted","Data":"fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62"}
Jan 20 04:44:19 crc kubenswrapper[4898]: I0120 04:44:19.675493 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:19 crc kubenswrapper[4898]: I0120 04:44:19.676462 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:19 crc kubenswrapper[4898]: I0120 04:44:19.759527 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:19 crc kubenswrapper[4898]: I0120 04:44:19.798984 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gzs45" podStartSLOduration=8.288993619 podStartE2EDuration="10.798958702s" podCreationTimestamp="2026-01-20 04:44:09 +0000 UTC" firstStartedPulling="2026-01-20 04:44:10.750107665 +0000 UTC m=+3297.349895524" lastFinishedPulling="2026-01-20 04:44:13.260072748 +0000 UTC m=+3299.859860607" observedRunningTime="2026-01-20 04:44:13.818014741 +0000 UTC m=+3300.417802600" watchObservedRunningTime="2026-01-20 04:44:19.798958702 +0000 UTC m=+3306.398746591"
Jan 20 04:44:19 crc kubenswrapper[4898]: I0120 04:44:19.936794 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gzs45"
Jan 20 04:44:20 crc kubenswrapper[4898]: I0120 04:44:20.011583 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzs45"]
Jan 20 04:44:21 crc kubenswrapper[4898]: I0120 04:44:21.873262 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gzs45" podUID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" containerName="registry-server" containerID="cri-o://fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62" gracePeriod=2
Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.389116 4898 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzs45" Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.566251 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84ngj\" (UniqueName: \"kubernetes.io/projected/c9886ee0-b599-4b06-b919-c34e2b31dcb4-kube-api-access-84ngj\") pod \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.567015 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-utilities\") pod \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.567143 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-catalog-content\") pod \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\" (UID: \"c9886ee0-b599-4b06-b919-c34e2b31dcb4\") " Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.568347 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-utilities" (OuterVolumeSpecName: "utilities") pod "c9886ee0-b599-4b06-b919-c34e2b31dcb4" (UID: "c9886ee0-b599-4b06-b919-c34e2b31dcb4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.576857 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9886ee0-b599-4b06-b919-c34e2b31dcb4-kube-api-access-84ngj" (OuterVolumeSpecName: "kube-api-access-84ngj") pod "c9886ee0-b599-4b06-b919-c34e2b31dcb4" (UID: "c9886ee0-b599-4b06-b919-c34e2b31dcb4"). InnerVolumeSpecName "kube-api-access-84ngj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.597568 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9886ee0-b599-4b06-b919-c34e2b31dcb4" (UID: "c9886ee0-b599-4b06-b919-c34e2b31dcb4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.670088 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.670140 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9886ee0-b599-4b06-b919-c34e2b31dcb4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.670167 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84ngj\" (UniqueName: \"kubernetes.io/projected/c9886ee0-b599-4b06-b919-c34e2b31dcb4-kube-api-access-84ngj\") on node \"crc\" DevicePath \"\"" Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.885484 4898 generic.go:334] "Generic (PLEG): container finished" podID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" containerID="fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62" exitCode=0 Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.885542 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzs45" event={"ID":"c9886ee0-b599-4b06-b919-c34e2b31dcb4","Type":"ContainerDied","Data":"fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62"} Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.885598 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzs45" event={"ID":"c9886ee0-b599-4b06-b919-c34e2b31dcb4","Type":"ContainerDied","Data":"8ff91dc72858505e6487b6773de4983af46f08af5ef869ed7909237a39ad457c"} Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.885596 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzs45" Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.885708 4898 scope.go:117] "RemoveContainer" containerID="fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62" Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.920170 4898 scope.go:117] "RemoveContainer" containerID="71ab2a267c5a6610423d4f13dab5aea99c16b44bd740395a75fdc596450fc94f" Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.928624 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzs45"] Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.936622 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzs45"] Jan 20 04:44:22 crc kubenswrapper[4898]: I0120 04:44:22.952288 4898 scope.go:117] "RemoveContainer" containerID="90196fa36e38b44600942adac3a2a59f245eedf5002c8b3579eda6424c5331b4" Jan 20 04:44:23 crc kubenswrapper[4898]: I0120 04:44:23.001552 4898 scope.go:117] "RemoveContainer" containerID="fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62" Jan 20 04:44:23 crc kubenswrapper[4898]: E0120 04:44:23.001990 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62\": container with ID starting with fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62 not found: ID does not exist" containerID="fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62" Jan 20 04:44:23 crc kubenswrapper[4898]: I0120 04:44:23.002037 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62"} err="failed to get container status \"fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62\": rpc error: code = NotFound desc = could not find container \"fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62\": container with ID starting with fc11f29e6983aabe54c3ada29d8d60527e274f5df232943b39b993c86fd3ac62 not found: ID does not exist" Jan 20 04:44:23 crc kubenswrapper[4898]: I0120 04:44:23.002063 4898 scope.go:117] "RemoveContainer" containerID="71ab2a267c5a6610423d4f13dab5aea99c16b44bd740395a75fdc596450fc94f" Jan 20 04:44:23 crc kubenswrapper[4898]: E0120 04:44:23.002504 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ab2a267c5a6610423d4f13dab5aea99c16b44bd740395a75fdc596450fc94f\": container with ID starting with 71ab2a267c5a6610423d4f13dab5aea99c16b44bd740395a75fdc596450fc94f not found: ID does not exist" containerID="71ab2a267c5a6610423d4f13dab5aea99c16b44bd740395a75fdc596450fc94f" Jan 20 04:44:23 crc kubenswrapper[4898]: I0120 04:44:23.002543 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ab2a267c5a6610423d4f13dab5aea99c16b44bd740395a75fdc596450fc94f"} err="failed to get container status \"71ab2a267c5a6610423d4f13dab5aea99c16b44bd740395a75fdc596450fc94f\": rpc error: code = NotFound desc = could not find container \"71ab2a267c5a6610423d4f13dab5aea99c16b44bd740395a75fdc596450fc94f\": container with ID starting with 71ab2a267c5a6610423d4f13dab5aea99c16b44bd740395a75fdc596450fc94f not found: ID does not exist" Jan 20 04:44:23 crc kubenswrapper[4898]: I0120 04:44:23.002563 4898 scope.go:117] "RemoveContainer" 
containerID="90196fa36e38b44600942adac3a2a59f245eedf5002c8b3579eda6424c5331b4" Jan 20 04:44:23 crc kubenswrapper[4898]: E0120 04:44:23.002967 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90196fa36e38b44600942adac3a2a59f245eedf5002c8b3579eda6424c5331b4\": container with ID starting with 90196fa36e38b44600942adac3a2a59f245eedf5002c8b3579eda6424c5331b4 not found: ID does not exist" containerID="90196fa36e38b44600942adac3a2a59f245eedf5002c8b3579eda6424c5331b4" Jan 20 04:44:23 crc kubenswrapper[4898]: I0120 04:44:23.002996 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90196fa36e38b44600942adac3a2a59f245eedf5002c8b3579eda6424c5331b4"} err="failed to get container status \"90196fa36e38b44600942adac3a2a59f245eedf5002c8b3579eda6424c5331b4\": rpc error: code = NotFound desc = could not find container \"90196fa36e38b44600942adac3a2a59f245eedf5002c8b3579eda6424c5331b4\": container with ID starting with 90196fa36e38b44600942adac3a2a59f245eedf5002c8b3579eda6424c5331b4 not found: ID does not exist" Jan 20 04:44:23 crc kubenswrapper[4898]: I0120 04:44:23.738638 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" path="/var/lib/kubelet/pods/c9886ee0-b599-4b06-b919-c34e2b31dcb4/volumes" Jan 20 04:44:27 crc kubenswrapper[4898]: I0120 04:44:27.721048 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:44:27 crc kubenswrapper[4898]: E0120 04:44:27.722010 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:44:38 crc kubenswrapper[4898]: I0120 04:44:38.721402 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:44:38 crc kubenswrapper[4898]: E0120 04:44:38.722366 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:44:49 crc kubenswrapper[4898]: I0120 04:44:49.722423 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:44:49 crc kubenswrapper[4898]: E0120 04:44:49.723516 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.183830 4898 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f"] Jan 20 04:45:00 crc kubenswrapper[4898]: E0120 04:45:00.190538 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" containerName="extract-content" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.190626 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" containerName="extract-content" Jan 20 04:45:00 crc kubenswrapper[4898]: E0120 04:45:00.190674 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" containerName="registry-server" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.190690 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" containerName="registry-server" Jan 20 04:45:00 crc kubenswrapper[4898]: E0120 04:45:00.190745 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" containerName="extract-utilities" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.190759 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" containerName="extract-utilities" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.213688 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9886ee0-b599-4b06-b919-c34e2b31dcb4" containerName="registry-server" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.214703 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.222039 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.222279 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.231068 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f"] Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.331024 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e19dd1d-d8de-41d7-885b-4f29e5795537-config-volume\") pod \"collect-profiles-29481405-bll4f\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.331201 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8kl7\" (UniqueName: \"kubernetes.io/projected/5e19dd1d-d8de-41d7-885b-4f29e5795537-kube-api-access-s8kl7\") pod \"collect-profiles-29481405-bll4f\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.331257 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e19dd1d-d8de-41d7-885b-4f29e5795537-secret-volume\") pod \"collect-profiles-29481405-bll4f\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.433779 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e19dd1d-d8de-41d7-885b-4f29e5795537-config-volume\") pod \"collect-profiles-29481405-bll4f\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.434044 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8kl7\" (UniqueName: \"kubernetes.io/projected/5e19dd1d-d8de-41d7-885b-4f29e5795537-kube-api-access-s8kl7\") pod \"collect-profiles-29481405-bll4f\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.434129 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e19dd1d-d8de-41d7-885b-4f29e5795537-secret-volume\") pod \"collect-profiles-29481405-bll4f\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.435045 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e19dd1d-d8de-41d7-885b-4f29e5795537-config-volume\") pod \"collect-profiles-29481405-bll4f\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.448787 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e19dd1d-d8de-41d7-885b-4f29e5795537-secret-volume\") pod \"collect-profiles-29481405-bll4f\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.457930 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8kl7\" (UniqueName: \"kubernetes.io/projected/5e19dd1d-d8de-41d7-885b-4f29e5795537-kube-api-access-s8kl7\") pod \"collect-profiles-29481405-bll4f\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:00 crc kubenswrapper[4898]: I0120 04:45:00.546641 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:01 crc kubenswrapper[4898]: I0120 04:45:01.042723 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f"] Jan 20 04:45:01 crc kubenswrapper[4898]: I0120 04:45:01.327167 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" event={"ID":"5e19dd1d-d8de-41d7-885b-4f29e5795537","Type":"ContainerStarted","Data":"99097c653eb5846750b2637192e7de1cefe22cf3fc952232c5f4775b1021535a"} Jan 20 04:45:01 crc kubenswrapper[4898]: I0120 04:45:01.327534 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" event={"ID":"5e19dd1d-d8de-41d7-885b-4f29e5795537","Type":"ContainerStarted","Data":"08275d1feabcf720a801a5188eec86edbf9b8d92f0984d67e086854e00392fb4"} Jan 20 04:45:01 crc kubenswrapper[4898]: I0120 04:45:01.347093 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" podStartSLOduration=1.3470725780000001 podStartE2EDuration="1.347072578s" podCreationTimestamp="2026-01-20 04:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 04:45:01.339757713 +0000 UTC m=+3347.939545602" watchObservedRunningTime="2026-01-20 04:45:01.347072578 +0000 UTC m=+3347.946860437" Jan 20 04:45:02 crc kubenswrapper[4898]: I0120 04:45:02.339515 4898 generic.go:334] "Generic (PLEG): container finished" podID="5e19dd1d-d8de-41d7-885b-4f29e5795537" containerID="99097c653eb5846750b2637192e7de1cefe22cf3fc952232c5f4775b1021535a" exitCode=0 Jan 20 04:45:02 crc kubenswrapper[4898]: I0120 04:45:02.339573 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" event={"ID":"5e19dd1d-d8de-41d7-885b-4f29e5795537","Type":"ContainerDied","Data":"99097c653eb5846750b2637192e7de1cefe22cf3fc952232c5f4775b1021535a"} Jan 20 04:45:03 crc kubenswrapper[4898]: I0120 04:45:03.727820 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:45:03 crc kubenswrapper[4898]: E0120 04:45:03.728423 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:45:03 crc kubenswrapper[4898]: I0120 04:45:03.755149 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:03 crc kubenswrapper[4898]: I0120 04:45:03.898133 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e19dd1d-d8de-41d7-885b-4f29e5795537-secret-volume\") pod \"5e19dd1d-d8de-41d7-885b-4f29e5795537\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " Jan 20 04:45:03 crc kubenswrapper[4898]: I0120 04:45:03.898220 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8kl7\" (UniqueName: \"kubernetes.io/projected/5e19dd1d-d8de-41d7-885b-4f29e5795537-kube-api-access-s8kl7\") pod \"5e19dd1d-d8de-41d7-885b-4f29e5795537\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " Jan 20 04:45:03 crc kubenswrapper[4898]: I0120 04:45:03.898251 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e19dd1d-d8de-41d7-885b-4f29e5795537-config-volume\") pod \"5e19dd1d-d8de-41d7-885b-4f29e5795537\" (UID: \"5e19dd1d-d8de-41d7-885b-4f29e5795537\") " Jan 20 04:45:03 crc kubenswrapper[4898]: I0120 04:45:03.899808 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e19dd1d-d8de-41d7-885b-4f29e5795537-config-volume" (OuterVolumeSpecName: "config-volume") pod "5e19dd1d-d8de-41d7-885b-4f29e5795537" (UID: "5e19dd1d-d8de-41d7-885b-4f29e5795537"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 04:45:03 crc kubenswrapper[4898]: I0120 04:45:03.908678 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e19dd1d-d8de-41d7-885b-4f29e5795537-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5e19dd1d-d8de-41d7-885b-4f29e5795537" (UID: "5e19dd1d-d8de-41d7-885b-4f29e5795537"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 04:45:03 crc kubenswrapper[4898]: I0120 04:45:03.908783 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e19dd1d-d8de-41d7-885b-4f29e5795537-kube-api-access-s8kl7" (OuterVolumeSpecName: "kube-api-access-s8kl7") pod "5e19dd1d-d8de-41d7-885b-4f29e5795537" (UID: "5e19dd1d-d8de-41d7-885b-4f29e5795537"). InnerVolumeSpecName "kube-api-access-s8kl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:45:04 crc kubenswrapper[4898]: I0120 04:45:04.000892 4898 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e19dd1d-d8de-41d7-885b-4f29e5795537-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 04:45:04 crc kubenswrapper[4898]: I0120 04:45:04.000929 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8kl7\" (UniqueName: \"kubernetes.io/projected/5e19dd1d-d8de-41d7-885b-4f29e5795537-kube-api-access-s8kl7\") on node \"crc\" DevicePath \"\"" Jan 20 04:45:04 crc kubenswrapper[4898]: I0120 04:45:04.000938 4898 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e19dd1d-d8de-41d7-885b-4f29e5795537-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 04:45:04 crc kubenswrapper[4898]: I0120 04:45:04.359020 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" event={"ID":"5e19dd1d-d8de-41d7-885b-4f29e5795537","Type":"ContainerDied","Data":"08275d1feabcf720a801a5188eec86edbf9b8d92f0984d67e086854e00392fb4"} Jan 20 04:45:04 crc kubenswrapper[4898]: I0120 04:45:04.359086 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08275d1feabcf720a801a5188eec86edbf9b8d92f0984d67e086854e00392fb4" Jan 20 04:45:04 crc kubenswrapper[4898]: I0120 04:45:04.359106 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481405-bll4f" Jan 20 04:45:04 crc kubenswrapper[4898]: I0120 04:45:04.423836 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs"] Jan 20 04:45:04 crc kubenswrapper[4898]: I0120 04:45:04.431729 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481360-w9zhs"] Jan 20 04:45:05 crc kubenswrapper[4898]: I0120 04:45:05.730441 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02cc424a-e563-4b3d-8fa8-f67b29c67d39" path="/var/lib/kubelet/pods/02cc424a-e563-4b3d-8fa8-f67b29c67d39/volumes" Jan 20 04:45:16 crc kubenswrapper[4898]: I0120 04:45:16.721565 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:45:16 crc kubenswrapper[4898]: E0120 04:45:16.722244 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:45:18 crc kubenswrapper[4898]: I0120 04:45:18.917123 4898 scope.go:117] "RemoveContainer" containerID="67859f9a87b7cfab253bfa978078536a5fd74144d3ef72065a0b64feacdf9bdf" Jan 20 04:45:29 crc kubenswrapper[4898]: I0120 04:45:29.721545 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:45:29 crc kubenswrapper[4898]: E0120 04:45:29.722400 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.024473 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-68jgd"] Jan 20 04:45:32 crc kubenswrapper[4898]: E0120 04:45:32.025565 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e19dd1d-d8de-41d7-885b-4f29e5795537" containerName="collect-profiles" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.025580 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e19dd1d-d8de-41d7-885b-4f29e5795537" containerName="collect-profiles" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.025793 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e19dd1d-d8de-41d7-885b-4f29e5795537" containerName="collect-profiles" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.027097 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.049385 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68jgd"] Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.050946 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-utilities\") pod \"redhat-operators-68jgd\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.050985 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kdsh\" (UniqueName: \"kubernetes.io/projected/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-kube-api-access-8kdsh\") pod \"redhat-operators-68jgd\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.051035 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-catalog-content\") pod \"redhat-operators-68jgd\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.152653 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-catalog-content\") pod \"redhat-operators-68jgd\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.152844 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-utilities\") pod \"redhat-operators-68jgd\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.152897 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kdsh\" 
(UniqueName: \"kubernetes.io/projected/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-kube-api-access-8kdsh\") pod \"redhat-operators-68jgd\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.153268 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-catalog-content\") pod \"redhat-operators-68jgd\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.153295 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-utilities\") pod \"redhat-operators-68jgd\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.171318 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kdsh\" (UniqueName: \"kubernetes.io/projected/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-kube-api-access-8kdsh\") pod \"redhat-operators-68jgd\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.365359 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:32 crc kubenswrapper[4898]: I0120 04:45:32.874603 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68jgd"] Jan 20 04:45:33 crc kubenswrapper[4898]: I0120 04:45:33.616809 4898 generic.go:334] "Generic (PLEG): container finished" podID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" containerID="0d03d3f571c0e9d99e49b1da8aab5ca9620150504c98e27f6377774a66e794e7" exitCode=0 Jan 20 04:45:33 crc kubenswrapper[4898]: I0120 04:45:33.616909 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68jgd" event={"ID":"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5","Type":"ContainerDied","Data":"0d03d3f571c0e9d99e49b1da8aab5ca9620150504c98e27f6377774a66e794e7"} Jan 20 04:45:33 crc kubenswrapper[4898]: I0120 04:45:33.617123 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68jgd" event={"ID":"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5","Type":"ContainerStarted","Data":"a4aae2d2d1b0888429b4066d831a69faddbfac4c72ad72bb61bfbf902e369505"} Jan 20 04:45:34 crc kubenswrapper[4898]: I0120 04:45:34.628104 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68jgd" event={"ID":"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5","Type":"ContainerStarted","Data":"a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047"} Jan 20 04:45:36 crc kubenswrapper[4898]: I0120 04:45:36.644379 4898 generic.go:334] "Generic (PLEG): container finished" podID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" containerID="a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047" exitCode=0 Jan 20 04:45:36 crc kubenswrapper[4898]: I0120 04:45:36.644472 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68jgd" event={"ID":"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5","Type":"ContainerDied","Data":"a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047"} Jan 20 04:45:38 crc 
kubenswrapper[4898]: I0120 04:45:38.672245 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68jgd" event={"ID":"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5","Type":"ContainerStarted","Data":"967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe"} Jan 20 04:45:38 crc kubenswrapper[4898]: I0120 04:45:38.710329 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-68jgd" podStartSLOduration=3.411318659 podStartE2EDuration="7.710299068s" podCreationTimestamp="2026-01-20 04:45:31 +0000 UTC" firstStartedPulling="2026-01-20 04:45:33.619033932 +0000 UTC m=+3380.218821791" lastFinishedPulling="2026-01-20 04:45:37.918014341 +0000 UTC m=+3384.517802200" observedRunningTime="2026-01-20 04:45:38.698617988 +0000 UTC m=+3385.298405867" watchObservedRunningTime="2026-01-20 04:45:38.710299068 +0000 UTC m=+3385.310086937" Jan 20 04:45:42 crc kubenswrapper[4898]: I0120 04:45:42.366985 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:42 crc kubenswrapper[4898]: I0120 04:45:42.367399 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:43 crc kubenswrapper[4898]: I0120 04:45:43.462272 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-68jgd" podUID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" containerName="registry-server" probeResult="failure" output=< Jan 20 04:45:43 crc kubenswrapper[4898]: timeout: failed to connect service ":50051" within 1s Jan 20 04:45:43 crc kubenswrapper[4898]: > Jan 20 04:45:43 crc kubenswrapper[4898]: I0120 04:45:43.735915 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:45:43 crc kubenswrapper[4898]: E0120 04:45:43.736461 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:45:52 crc kubenswrapper[4898]: I0120 04:45:52.451797 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:52 crc kubenswrapper[4898]: I0120 04:45:52.519707 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:52 crc kubenswrapper[4898]: I0120 04:45:52.698154 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68jgd"] Jan 20 04:45:53 crc kubenswrapper[4898]: I0120 04:45:53.844963 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-68jgd" podUID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" containerName="registry-server" containerID="cri-o://967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe" gracePeriod=2 Jan 20 04:45:54 crc kubenswrapper[4898]: E0120 04:45:54.002197 4898 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6ffa4f4_2f9c_4ee5_8503_be0c3fd759b5.slice/crio-967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6ffa4f4_2f9c_4ee5_8503_be0c3fd759b5.slice/crio-conmon-967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe.scope\": RecentStats: unable to find data in memory cache]" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.300184 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.362639 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kdsh\" (UniqueName: \"kubernetes.io/projected/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-kube-api-access-8kdsh\") pod \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.362770 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-utilities\") pod \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.363016 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-catalog-content\") pod \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\" (UID: \"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5\") " Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.366503 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-utilities" (OuterVolumeSpecName: "utilities") pod "e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" (UID: "e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.375017 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-kube-api-access-8kdsh" (OuterVolumeSpecName: "kube-api-access-8kdsh") pod "e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" (UID: "e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5"). InnerVolumeSpecName "kube-api-access-8kdsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.464850 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kdsh\" (UniqueName: \"kubernetes.io/projected/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-kube-api-access-8kdsh\") on node \"crc\" DevicePath \"\"" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.464904 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.497136 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" (UID: "e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.566626 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.722610 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:45:54 crc kubenswrapper[4898]: E0120 04:45:54.723362 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.855242 4898 generic.go:334] "Generic (PLEG): container finished" podID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" containerID="967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe" exitCode=0 Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.855291 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68jgd" event={"ID":"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5","Type":"ContainerDied","Data":"967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe"} Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.855330 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68jgd" event={"ID":"e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5","Type":"ContainerDied","Data":"a4aae2d2d1b0888429b4066d831a69faddbfac4c72ad72bb61bfbf902e369505"} Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.855353 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68jgd" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.855379 4898 scope.go:117] "RemoveContainer" containerID="967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.878366 4898 scope.go:117] "RemoveContainer" containerID="a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.910293 4898 scope.go:117] "RemoveContainer" containerID="0d03d3f571c0e9d99e49b1da8aab5ca9620150504c98e27f6377774a66e794e7" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.932856 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68jgd"] Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.945557 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-68jgd"] Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.974860 4898 scope.go:117] "RemoveContainer" containerID="967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe" Jan 20 04:45:54 crc kubenswrapper[4898]: E0120 04:45:54.975405 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe\": container with ID starting with 967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe not found: ID does not exist" containerID="967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.975470 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe"} err="failed to get container status \"967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe\": rpc error: code = NotFound desc = could not find container \"967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe\": container with ID starting with 967f3efc2a95487f4db4ea65d8c3452ad736bde2277dd03b2ad9592fad105afe not found: ID does not exist" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.975505 4898 scope.go:117] "RemoveContainer" containerID="a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047" Jan 20 04:45:54 crc kubenswrapper[4898]: E0120 04:45:54.975803 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047\": container with ID starting with a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047 not found: ID does not exist" containerID="a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.975839 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047"} err="failed to get container status \"a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047\": rpc error: code = NotFound desc = could not find container \"a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047\": container with ID starting with a9d1308bb9b5c0f00ab5e8983fc00526e9f9e530ccf2eef075e3dd85cc956047 not found: ID does not exist" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.975864 4898 scope.go:117] "RemoveContainer" 
containerID="0d03d3f571c0e9d99e49b1da8aab5ca9620150504c98e27f6377774a66e794e7" Jan 20 04:45:54 crc kubenswrapper[4898]: E0120 04:45:54.976263 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d03d3f571c0e9d99e49b1da8aab5ca9620150504c98e27f6377774a66e794e7\": container with ID starting with 0d03d3f571c0e9d99e49b1da8aab5ca9620150504c98e27f6377774a66e794e7 not found: ID does not exist" containerID="0d03d3f571c0e9d99e49b1da8aab5ca9620150504c98e27f6377774a66e794e7" Jan 20 04:45:54 crc kubenswrapper[4898]: I0120 04:45:54.976285 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d03d3f571c0e9d99e49b1da8aab5ca9620150504c98e27f6377774a66e794e7"} err="failed to get container status \"0d03d3f571c0e9d99e49b1da8aab5ca9620150504c98e27f6377774a66e794e7\": rpc error: code = NotFound desc = could not find container \"0d03d3f571c0e9d99e49b1da8aab5ca9620150504c98e27f6377774a66e794e7\": container with ID starting with 0d03d3f571c0e9d99e49b1da8aab5ca9620150504c98e27f6377774a66e794e7 not found: ID does not exist" Jan 20 04:45:55 crc kubenswrapper[4898]: I0120 04:45:55.737358 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" path="/var/lib/kubelet/pods/e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5/volumes" Jan 20 04:46:05 crc kubenswrapper[4898]: I0120 04:46:05.722365 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:46:05 crc kubenswrapper[4898]: E0120 04:46:05.723664 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:46:20 crc kubenswrapper[4898]: I0120 04:46:20.721664 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:46:21 crc kubenswrapper[4898]: I0120 04:46:21.128358 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"595cba167ada7139f1593fb48385876703efc309c37b089aa14cefa88d37dacd"} Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.079890 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8sbbl"] Jan 20 04:48:26 crc kubenswrapper[4898]: E0120 04:48:26.101461 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" containerName="registry-server" Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.101524 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" containerName="registry-server" Jan 20 04:48:26 crc kubenswrapper[4898]: E0120 04:48:26.101610 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" containerName="extract-utilities" Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.101625 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" containerName="extract-utilities" Jan 20 
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.101662 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" containerName="extract-content"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.102531 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ffa4f4-2f9c-4ee5-8503-be0c3fd759b5" containerName="registry-server"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.107007 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.130014 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8sbbl"]
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.130542 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlgp7\" (UniqueName: \"kubernetes.io/projected/2034d29b-4cc5-4704-8419-e7c6acccc967-kube-api-access-vlgp7\") pod \"community-operators-8sbbl\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") " pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.130655 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-catalog-content\") pod \"community-operators-8sbbl\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") " pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.130898 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-utilities\") pod \"community-operators-8sbbl\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") " pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.232917 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-utilities\") pod \"community-operators-8sbbl\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") " pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.232994 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlgp7\" (UniqueName: \"kubernetes.io/projected/2034d29b-4cc5-4704-8419-e7c6acccc967-kube-api-access-vlgp7\") pod \"community-operators-8sbbl\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") " pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.233043 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-catalog-content\") pod \"community-operators-8sbbl\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") " pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.233559 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-catalog-content\") pod \"community-operators-8sbbl\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") " pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.233550 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-utilities\") pod \"community-operators-8sbbl\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") " pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.278189 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlgp7\" (UniqueName: \"kubernetes.io/projected/2034d29b-4cc5-4704-8419-e7c6acccc967-kube-api-access-vlgp7\") pod \"community-operators-8sbbl\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") " pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.451193 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:26 crc kubenswrapper[4898]: I0120 04:48:26.981825 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8sbbl"]
Jan 20 04:48:27 crc kubenswrapper[4898]: I0120 04:48:27.555210 4898 generic.go:334] "Generic (PLEG): container finished" podID="2034d29b-4cc5-4704-8419-e7c6acccc967" containerID="ab11858d063dc665028f89266558e6cf5f5f4016cb5a7e6ccd9e83f429ad4ba1" exitCode=0
Jan 20 04:48:27 crc kubenswrapper[4898]: I0120 04:48:27.555271 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sbbl" event={"ID":"2034d29b-4cc5-4704-8419-e7c6acccc967","Type":"ContainerDied","Data":"ab11858d063dc665028f89266558e6cf5f5f4016cb5a7e6ccd9e83f429ad4ba1"}
Jan 20 04:48:27 crc kubenswrapper[4898]: I0120 04:48:27.555593 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sbbl" event={"ID":"2034d29b-4cc5-4704-8419-e7c6acccc967","Type":"ContainerStarted","Data":"193504dd0857d34a1c2da1249119c01025b99a4697fcbd9cfbaf0412cd7a8195"}
Jan 20 04:48:27 crc kubenswrapper[4898]: I0120 04:48:27.559088 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 04:48:29 crc kubenswrapper[4898]: I0120 04:48:29.578809 4898 generic.go:334] "Generic (PLEG): container finished" podID="2034d29b-4cc5-4704-8419-e7c6acccc967" containerID="9b42d5bcb1887e5775fd9f0e5ceabd01912b9cd2307447c2354d4efe4c4ece2a" exitCode=0
Jan 20 04:48:29 crc kubenswrapper[4898]: I0120 04:48:29.578860 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sbbl" event={"ID":"2034d29b-4cc5-4704-8419-e7c6acccc967","Type":"ContainerDied","Data":"9b42d5bcb1887e5775fd9f0e5ceabd01912b9cd2307447c2354d4efe4c4ece2a"}
Jan 20 04:48:30 crc kubenswrapper[4898]: I0120 04:48:30.590143 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sbbl" event={"ID":"2034d29b-4cc5-4704-8419-e7c6acccc967","Type":"ContainerStarted","Data":"af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70"}
Jan 20 04:48:30 crc kubenswrapper[4898]: I0120 04:48:30.615951 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8sbbl" podStartSLOduration=2.146947451 podStartE2EDuration="4.615931968s" podCreationTimestamp="2026-01-20 04:48:26 +0000 UTC" firstStartedPulling="2026-01-20 04:48:27.557352638 +0000 UTC m=+3554.157140497" lastFinishedPulling="2026-01-20 04:48:30.026337135 +0000 UTC m=+3556.626125014" observedRunningTime="2026-01-20 04:48:30.611181691 +0000 UTC m=+3557.210969550" watchObservedRunningTime="2026-01-20 04:48:30.615931968 +0000 UTC m=+3557.215719827"
Jan 20 04:48:36 crc kubenswrapper[4898]: I0120 04:48:36.452373 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:36 crc kubenswrapper[4898]: I0120 04:48:36.453119 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:36 crc kubenswrapper[4898]: I0120 04:48:36.526738 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:36 crc kubenswrapper[4898]: I0120 04:48:36.687068 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:36 crc kubenswrapper[4898]: I0120 04:48:36.762911 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8sbbl"]
Jan 20 04:48:38 crc kubenswrapper[4898]: I0120 04:48:38.662468 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8sbbl" podUID="2034d29b-4cc5-4704-8419-e7c6acccc967" containerName="registry-server" containerID="cri-o://af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70" gracePeriod=2
Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.196498 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8sbbl"
Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.324911 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlgp7\" (UniqueName: \"kubernetes.io/projected/2034d29b-4cc5-4704-8419-e7c6acccc967-kube-api-access-vlgp7\") pod \"2034d29b-4cc5-4704-8419-e7c6acccc967\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") "
Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.325249 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-catalog-content\") pod \"2034d29b-4cc5-4704-8419-e7c6acccc967\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") "
Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.325473 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-utilities\") pod \"2034d29b-4cc5-4704-8419-e7c6acccc967\" (UID: \"2034d29b-4cc5-4704-8419-e7c6acccc967\") "
Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.326415 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-utilities" (OuterVolumeSpecName: "utilities") pod "2034d29b-4cc5-4704-8419-e7c6acccc967" (UID: "2034d29b-4cc5-4704-8419-e7c6acccc967"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.332478 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2034d29b-4cc5-4704-8419-e7c6acccc967-kube-api-access-vlgp7" (OuterVolumeSpecName: "kube-api-access-vlgp7") pod "2034d29b-4cc5-4704-8419-e7c6acccc967" (UID: "2034d29b-4cc5-4704-8419-e7c6acccc967"). InnerVolumeSpecName "kube-api-access-vlgp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.372347 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2034d29b-4cc5-4704-8419-e7c6acccc967" (UID: "2034d29b-4cc5-4704-8419-e7c6acccc967"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.427744 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.428531 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlgp7\" (UniqueName: \"kubernetes.io/projected/2034d29b-4cc5-4704-8419-e7c6acccc967-kube-api-access-vlgp7\") on node \"crc\" DevicePath \"\"" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.428631 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2034d29b-4cc5-4704-8419-e7c6acccc967-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.676669 4898 generic.go:334] "Generic (PLEG): container finished" podID="2034d29b-4cc5-4704-8419-e7c6acccc967" containerID="af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70" exitCode=0 Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.676728 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sbbl" event={"ID":"2034d29b-4cc5-4704-8419-e7c6acccc967","Type":"ContainerDied","Data":"af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70"} Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.676768 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sbbl" event={"ID":"2034d29b-4cc5-4704-8419-e7c6acccc967","Type":"ContainerDied","Data":"193504dd0857d34a1c2da1249119c01025b99a4697fcbd9cfbaf0412cd7a8195"} Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.676799 4898 scope.go:117] "RemoveContainer" containerID="af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.676999 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8sbbl" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.708735 4898 scope.go:117] "RemoveContainer" containerID="9b42d5bcb1887e5775fd9f0e5ceabd01912b9cd2307447c2354d4efe4c4ece2a" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.743758 4898 scope.go:117] "RemoveContainer" containerID="ab11858d063dc665028f89266558e6cf5f5f4016cb5a7e6ccd9e83f429ad4ba1" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.746200 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8sbbl"] Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.746373 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8sbbl"] Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.795506 4898 scope.go:117] "RemoveContainer" containerID="af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70" Jan 20 04:48:39 crc kubenswrapper[4898]: E0120 04:48:39.796057 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70\": container with ID starting with af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70 not found: ID does not exist" containerID="af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.796113 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70"} err="failed to get container status \"af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70\": rpc error: code = NotFound desc = could not find container \"af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70\": container with ID starting with af838a2d8b039e17b48bf3079fb0571699da9c8132396c3ddaa2ae910ab37f70 not found: ID does not exist" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.796146 4898 scope.go:117] "RemoveContainer" containerID="9b42d5bcb1887e5775fd9f0e5ceabd01912b9cd2307447c2354d4efe4c4ece2a" Jan 20 04:48:39 crc kubenswrapper[4898]: E0120 04:48:39.796808 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b42d5bcb1887e5775fd9f0e5ceabd01912b9cd2307447c2354d4efe4c4ece2a\": container with ID starting with 9b42d5bcb1887e5775fd9f0e5ceabd01912b9cd2307447c2354d4efe4c4ece2a not found: ID does not exist" containerID="9b42d5bcb1887e5775fd9f0e5ceabd01912b9cd2307447c2354d4efe4c4ece2a" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.796998 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b42d5bcb1887e5775fd9f0e5ceabd01912b9cd2307447c2354d4efe4c4ece2a"} err="failed to get container status \"9b42d5bcb1887e5775fd9f0e5ceabd01912b9cd2307447c2354d4efe4c4ece2a\": rpc error: code = NotFound desc = could not find container \"9b42d5bcb1887e5775fd9f0e5ceabd01912b9cd2307447c2354d4efe4c4ece2a\": container with ID starting with 9b42d5bcb1887e5775fd9f0e5ceabd01912b9cd2307447c2354d4efe4c4ece2a not found: ID does not exist" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.797143 4898 scope.go:117] "RemoveContainer" containerID="ab11858d063dc665028f89266558e6cf5f5f4016cb5a7e6ccd9e83f429ad4ba1" Jan 20 04:48:39 crc kubenswrapper[4898]: E0120 04:48:39.797718 4898 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ab11858d063dc665028f89266558e6cf5f5f4016cb5a7e6ccd9e83f429ad4ba1\": container with ID starting with ab11858d063dc665028f89266558e6cf5f5f4016cb5a7e6ccd9e83f429ad4ba1 not found: ID does not exist" containerID="ab11858d063dc665028f89266558e6cf5f5f4016cb5a7e6ccd9e83f429ad4ba1" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.797777 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab11858d063dc665028f89266558e6cf5f5f4016cb5a7e6ccd9e83f429ad4ba1"} err="failed to get container status \"ab11858d063dc665028f89266558e6cf5f5f4016cb5a7e6ccd9e83f429ad4ba1\": rpc error: code = NotFound desc = could not find container \"ab11858d063dc665028f89266558e6cf5f5f4016cb5a7e6ccd9e83f429ad4ba1\": container with ID starting with ab11858d063dc665028f89266558e6cf5f5f4016cb5a7e6ccd9e83f429ad4ba1 not found: ID does not exist" Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.975513 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:48:39 crc kubenswrapper[4898]: I0120 04:48:39.975567 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:48:41 crc kubenswrapper[4898]: I0120 04:48:41.735743 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2034d29b-4cc5-4704-8419-e7c6acccc967" path="/var/lib/kubelet/pods/2034d29b-4cc5-4704-8419-e7c6acccc967/volumes" Jan 20 04:49:09 crc kubenswrapper[4898]: I0120 04:49:09.976004 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:49:09 crc kubenswrapper[4898]: I0120 04:49:09.976644 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:49:39 crc kubenswrapper[4898]: I0120 04:49:39.976355 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:49:39 crc kubenswrapper[4898]: I0120 04:49:39.977235 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:49:39 crc kubenswrapper[4898]: I0120 04:49:39.977311 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 04:49:39 crc kubenswrapper[4898]: I0120 04:49:39.978548 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"595cba167ada7139f1593fb48385876703efc309c37b089aa14cefa88d37dacd"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 04:49:39 crc kubenswrapper[4898]: I0120 04:49:39.978660 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://595cba167ada7139f1593fb48385876703efc309c37b089aa14cefa88d37dacd" gracePeriod=600 Jan 20 04:49:41 crc kubenswrapper[4898]: I0120 04:49:41.017554 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="595cba167ada7139f1593fb48385876703efc309c37b089aa14cefa88d37dacd" exitCode=0 Jan 20 04:49:41 crc kubenswrapper[4898]: I0120 04:49:41.017622 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"595cba167ada7139f1593fb48385876703efc309c37b089aa14cefa88d37dacd"} Jan 20 04:49:41 crc kubenswrapper[4898]: I0120 04:49:41.018255 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024"} Jan 20 04:49:41 crc kubenswrapper[4898]: I0120 04:49:41.018311 4898 scope.go:117] "RemoveContainer" containerID="b5c7b8a4e145ca55b37378fbd372fcd54ec4ba858adc431393aa22158e118cff" Jan 20 04:52:09 crc kubenswrapper[4898]: I0120 04:52:09.976000 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:52:09 crc kubenswrapper[4898]: I0120 04:52:09.976728 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:52:39 crc kubenswrapper[4898]: I0120 04:52:39.976491 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:52:39 crc kubenswrapper[4898]: I0120 04:52:39.977136 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:53:09 crc 
kubenswrapper[4898]: I0120 04:53:09.975917 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 04:53:09 crc kubenswrapper[4898]: I0120 04:53:09.976455 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 04:53:09 crc kubenswrapper[4898]: I0120 04:53:09.976509 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 04:53:09 crc kubenswrapper[4898]: I0120 04:53:09.977483 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 04:53:09 crc kubenswrapper[4898]: I0120 04:53:09.977555 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" gracePeriod=600 Jan 20 04:53:10 crc kubenswrapper[4898]: E0120 04:53:10.107203 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:53:10 crc kubenswrapper[4898]: I0120 04:53:10.555464 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" exitCode=0 Jan 20 04:53:10 crc kubenswrapper[4898]: I0120 04:53:10.555477 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024"} Jan 20 04:53:10 crc kubenswrapper[4898]: I0120 04:53:10.555562 4898 scope.go:117] "RemoveContainer" containerID="595cba167ada7139f1593fb48385876703efc309c37b089aa14cefa88d37dacd" Jan 20 04:53:10 crc kubenswrapper[4898]: I0120 04:53:10.556087 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:53:10 crc kubenswrapper[4898]: E0120 04:53:10.556380 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:53:23 crc kubenswrapper[4898]: I0120 04:53:23.731898 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:53:23 crc kubenswrapper[4898]: E0120 04:53:23.733252 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:53:34 crc kubenswrapper[4898]: I0120 04:53:34.721193 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:53:34 crc kubenswrapper[4898]: E0120 04:53:34.721885 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:53:48 crc kubenswrapper[4898]: I0120 04:53:48.721260 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:53:48 crc kubenswrapper[4898]: E0120 04:53:48.722364 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:54:02 crc kubenswrapper[4898]: I0120 04:54:02.724389 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:54:02 crc kubenswrapper[4898]: E0120 04:54:02.725374 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:54:17 crc kubenswrapper[4898]: I0120 04:54:17.721887 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:54:17 crc kubenswrapper[4898]: E0120 04:54:17.722992 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" 
podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:54:29 crc kubenswrapper[4898]: I0120 04:54:29.722965 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:54:29 crc kubenswrapper[4898]: E0120 04:54:29.723816 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:54:40 crc kubenswrapper[4898]: I0120 04:54:40.725588 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:54:40 crc kubenswrapper[4898]: E0120 04:54:40.726362 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:54:54 crc kubenswrapper[4898]: I0120 04:54:54.722079 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:54:54 crc kubenswrapper[4898]: E0120 04:54:54.723942 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.015172 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z7sn2"] Jan 20 04:55:07 crc kubenswrapper[4898]: E0120 04:55:07.016161 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2034d29b-4cc5-4704-8419-e7c6acccc967" containerName="extract-utilities" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.016175 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2034d29b-4cc5-4704-8419-e7c6acccc967" containerName="extract-utilities" Jan 20 04:55:07 crc kubenswrapper[4898]: E0120 04:55:07.016204 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2034d29b-4cc5-4704-8419-e7c6acccc967" containerName="extract-content" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.016210 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2034d29b-4cc5-4704-8419-e7c6acccc967" containerName="extract-content" Jan 20 04:55:07 crc kubenswrapper[4898]: E0120 04:55:07.016230 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2034d29b-4cc5-4704-8419-e7c6acccc967" containerName="registry-server" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.016237 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="2034d29b-4cc5-4704-8419-e7c6acccc967" containerName="registry-server" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.016408 4898 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2034d29b-4cc5-4704-8419-e7c6acccc967" containerName="registry-server" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.017648 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.028661 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7sn2"] Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.073663 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f4sr\" (UniqueName: \"kubernetes.io/projected/fee5adb4-4e69-4846-b6f8-9751f61d99ff-kube-api-access-4f4sr\") pod \"redhat-marketplace-z7sn2\" (UID: \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.073782 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-utilities\") pod \"redhat-marketplace-z7sn2\" (UID: \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.073850 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-catalog-content\") pod \"redhat-marketplace-z7sn2\" (UID: \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.175827 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-catalog-content\") pod \"redhat-marketplace-z7sn2\" (UID: \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.175960 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f4sr\" (UniqueName: \"kubernetes.io/projected/fee5adb4-4e69-4846-b6f8-9751f61d99ff-kube-api-access-4f4sr\") pod \"redhat-marketplace-z7sn2\" (UID: \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.176035 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-utilities\") pod \"redhat-marketplace-z7sn2\" (UID: \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.176730 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-utilities\") pod \"redhat-marketplace-z7sn2\" (UID: \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.176717 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-catalog-content\") pod \"redhat-marketplace-z7sn2\" (UID: 
\"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.199176 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f4sr\" (UniqueName: \"kubernetes.io/projected/fee5adb4-4e69-4846-b6f8-9751f61d99ff-kube-api-access-4f4sr\") pod \"redhat-marketplace-z7sn2\" (UID: \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.337843 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.721660 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:55:07 crc kubenswrapper[4898]: E0120 04:55:07.722149 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:55:07 crc kubenswrapper[4898]: I0120 04:55:07.833378 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7sn2"] Jan 20 04:55:08 crc kubenswrapper[4898]: I0120 04:55:08.076868 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7sn2" event={"ID":"fee5adb4-4e69-4846-b6f8-9751f61d99ff","Type":"ContainerStarted","Data":"05aa29e809ed522c4b29bcfcf23115b462a2055cd6e5f45e4e86bb053bd31f7e"} Jan 20 04:55:08 crc kubenswrapper[4898]: I0120 04:55:08.077375 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7sn2" event={"ID":"fee5adb4-4e69-4846-b6f8-9751f61d99ff","Type":"ContainerStarted","Data":"02f8536400af5b3eebb879885dbf6b4b324966a2d843d43883daf45df569355d"} Jan 20 04:55:09 crc kubenswrapper[4898]: I0120 04:55:09.084991 4898 generic.go:334] "Generic (PLEG): container finished" podID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" containerID="05aa29e809ed522c4b29bcfcf23115b462a2055cd6e5f45e4e86bb053bd31f7e" exitCode=0 Jan 20 04:55:09 crc kubenswrapper[4898]: I0120 04:55:09.085051 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7sn2" event={"ID":"fee5adb4-4e69-4846-b6f8-9751f61d99ff","Type":"ContainerDied","Data":"05aa29e809ed522c4b29bcfcf23115b462a2055cd6e5f45e4e86bb053bd31f7e"} Jan 20 04:55:09 crc kubenswrapper[4898]: I0120 04:55:09.089302 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 04:55:11 crc kubenswrapper[4898]: I0120 04:55:11.107647 4898 generic.go:334] "Generic (PLEG): container finished" podID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" containerID="4ea7e8a76926fd3bef80193dd4daed131a885aa932472b66225c49f9ae63386c" exitCode=0 Jan 20 04:55:11 crc kubenswrapper[4898]: I0120 04:55:11.107726 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7sn2" event={"ID":"fee5adb4-4e69-4846-b6f8-9751f61d99ff","Type":"ContainerDied","Data":"4ea7e8a76926fd3bef80193dd4daed131a885aa932472b66225c49f9ae63386c"} Jan 20 04:55:12 crc kubenswrapper[4898]: I0120 
04:55:12.120522 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7sn2" event={"ID":"fee5adb4-4e69-4846-b6f8-9751f61d99ff","Type":"ContainerStarted","Data":"a85e7d51b0720309e3629156ff57369042f35bb9ce745154598ec6f60b6f7906"} Jan 20 04:55:12 crc kubenswrapper[4898]: I0120 04:55:12.165084 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z7sn2" podStartSLOduration=3.6254489789999997 podStartE2EDuration="6.165036662s" podCreationTimestamp="2026-01-20 04:55:06 +0000 UTC" firstStartedPulling="2026-01-20 04:55:09.089013018 +0000 UTC m=+3955.688800897" lastFinishedPulling="2026-01-20 04:55:11.628600701 +0000 UTC m=+3958.228388580" observedRunningTime="2026-01-20 04:55:12.144964262 +0000 UTC m=+3958.744752161" watchObservedRunningTime="2026-01-20 04:55:12.165036662 +0000 UTC m=+3958.764824561" Jan 20 04:55:17 crc kubenswrapper[4898]: I0120 04:55:17.338854 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:17 crc kubenswrapper[4898]: I0120 04:55:17.339920 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:17 crc kubenswrapper[4898]: I0120 04:55:17.408319 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:18 crc kubenswrapper[4898]: I0120 04:55:18.750164 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:18 crc kubenswrapper[4898]: I0120 04:55:18.805290 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7sn2"] Jan 20 04:55:20 crc kubenswrapper[4898]: I0120 04:55:20.205993 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z7sn2" podUID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" containerName="registry-server" containerID="cri-o://a85e7d51b0720309e3629156ff57369042f35bb9ce745154598ec6f60b6f7906" gracePeriod=2 Jan 20 04:55:20 crc kubenswrapper[4898]: I0120 04:55:20.722047 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:55:20 crc kubenswrapper[4898]: E0120 04:55:20.722815 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.217234 4898 generic.go:334] "Generic (PLEG): container finished" podID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" containerID="a85e7d51b0720309e3629156ff57369042f35bb9ce745154598ec6f60b6f7906" exitCode=0 Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.217272 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7sn2" event={"ID":"fee5adb4-4e69-4846-b6f8-9751f61d99ff","Type":"ContainerDied","Data":"a85e7d51b0720309e3629156ff57369042f35bb9ce745154598ec6f60b6f7906"} Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.700547 4898 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.809474 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-utilities\") pod \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\" (UID: \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.809596 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f4sr\" (UniqueName: \"kubernetes.io/projected/fee5adb4-4e69-4846-b6f8-9751f61d99ff-kube-api-access-4f4sr\") pod \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\" (UID: \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.809842 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-catalog-content\") pod \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\" (UID: \"fee5adb4-4e69-4846-b6f8-9751f61d99ff\") " Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.810840 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-utilities" (OuterVolumeSpecName: "utilities") pod "fee5adb4-4e69-4846-b6f8-9751f61d99ff" (UID: "fee5adb4-4e69-4846-b6f8-9751f61d99ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.826765 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee5adb4-4e69-4846-b6f8-9751f61d99ff-kube-api-access-4f4sr" (OuterVolumeSpecName: "kube-api-access-4f4sr") pod "fee5adb4-4e69-4846-b6f8-9751f61d99ff" (UID: "fee5adb4-4e69-4846-b6f8-9751f61d99ff"). InnerVolumeSpecName "kube-api-access-4f4sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.831237 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fee5adb4-4e69-4846-b6f8-9751f61d99ff" (UID: "fee5adb4-4e69-4846-b6f8-9751f61d99ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.911856 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.911886 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee5adb4-4e69-4846-b6f8-9751f61d99ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:55:21 crc kubenswrapper[4898]: I0120 04:55:21.911897 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f4sr\" (UniqueName: \"kubernetes.io/projected/fee5adb4-4e69-4846-b6f8-9751f61d99ff-kube-api-access-4f4sr\") on node \"crc\" DevicePath \"\"" Jan 20 04:55:22 crc kubenswrapper[4898]: I0120 04:55:22.226626 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7sn2" event={"ID":"fee5adb4-4e69-4846-b6f8-9751f61d99ff","Type":"ContainerDied","Data":"02f8536400af5b3eebb879885dbf6b4b324966a2d843d43883daf45df569355d"} Jan 20 04:55:22 crc kubenswrapper[4898]: I0120 04:55:22.226673 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7sn2" Jan 20 04:55:22 crc kubenswrapper[4898]: I0120 04:55:22.226682 4898 scope.go:117] "RemoveContainer" containerID="a85e7d51b0720309e3629156ff57369042f35bb9ce745154598ec6f60b6f7906" Jan 20 04:55:22 crc kubenswrapper[4898]: I0120 04:55:22.273131 4898 scope.go:117] "RemoveContainer" containerID="4ea7e8a76926fd3bef80193dd4daed131a885aa932472b66225c49f9ae63386c" Jan 20 04:55:22 crc kubenswrapper[4898]: I0120 04:55:22.283495 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7sn2"] Jan 20 04:55:22 crc kubenswrapper[4898]: I0120 04:55:22.303202 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7sn2"] Jan 20 04:55:22 crc kubenswrapper[4898]: I0120 04:55:22.307549 4898 scope.go:117] "RemoveContainer" containerID="05aa29e809ed522c4b29bcfcf23115b462a2055cd6e5f45e4e86bb053bd31f7e" Jan 20 04:55:23 crc kubenswrapper[4898]: I0120 04:55:23.748090 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" path="/var/lib/kubelet/pods/fee5adb4-4e69-4846-b6f8-9751f61d99ff/volumes" Jan 20 04:55:35 crc kubenswrapper[4898]: I0120 04:55:35.721491 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:55:35 crc kubenswrapper[4898]: E0120 04:55:35.722355 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:55:47 crc kubenswrapper[4898]: I0120 04:55:47.721592 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:55:47 crc kubenswrapper[4898]: E0120 04:55:47.722582 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:55:56 crc kubenswrapper[4898]: I0120 04:55:56.837385 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mgpvk"] Jan 20 04:55:56 crc kubenswrapper[4898]: E0120 04:55:56.839550 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" containerName="extract-utilities" Jan 20 04:55:56 crc kubenswrapper[4898]: I0120 04:55:56.839752 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" containerName="extract-utilities" Jan 20 04:55:56 crc kubenswrapper[4898]: E0120 04:55:56.839856 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" containerName="extract-content" Jan 20 04:55:56 crc kubenswrapper[4898]: I0120 04:55:56.839931 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" containerName="extract-content" Jan 20 04:55:56 crc kubenswrapper[4898]: E0120 04:55:56.840019 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" containerName="registry-server" Jan 20 04:55:56 crc kubenswrapper[4898]: I0120 04:55:56.840102 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" containerName="registry-server" Jan 20 04:55:56 crc kubenswrapper[4898]: I0120 04:55:56.840407 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee5adb4-4e69-4846-b6f8-9751f61d99ff" containerName="registry-server" Jan 20 04:55:56 crc kubenswrapper[4898]: I0120 04:55:56.842169 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:55:56 crc kubenswrapper[4898]: I0120 04:55:56.864422 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgpvk"] Jan 20 04:55:56 crc kubenswrapper[4898]: I0120 04:55:56.957194 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-utilities\") pod \"certified-operators-mgpvk\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:55:56 crc kubenswrapper[4898]: I0120 04:55:56.957476 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-catalog-content\") pod \"certified-operators-mgpvk\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:55:56 crc kubenswrapper[4898]: I0120 04:55:56.957591 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2fjx\" (UniqueName: \"kubernetes.io/projected/be744aea-0dbb-459e-b826-cdc290d6ab24-kube-api-access-w2fjx\") pod \"certified-operators-mgpvk\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:55:57 crc kubenswrapper[4898]: I0120 04:55:57.059528 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-utilities\") pod \"certified-operators-mgpvk\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:55:57 crc kubenswrapper[4898]: I0120 04:55:57.059779 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-catalog-content\") pod \"certified-operators-mgpvk\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:55:57 crc kubenswrapper[4898]: I0120 04:55:57.059878 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2fjx\" (UniqueName: \"kubernetes.io/projected/be744aea-0dbb-459e-b826-cdc290d6ab24-kube-api-access-w2fjx\") pod \"certified-operators-mgpvk\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:55:57 crc kubenswrapper[4898]: I0120 04:55:57.060098 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-utilities\") pod \"certified-operators-mgpvk\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:55:57 crc kubenswrapper[4898]: I0120 04:55:57.060163 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-catalog-content\") pod \"certified-operators-mgpvk\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:55:57 crc kubenswrapper[4898]: I0120 04:55:57.078323 4898 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w2fjx\" (UniqueName: \"kubernetes.io/projected/be744aea-0dbb-459e-b826-cdc290d6ab24-kube-api-access-w2fjx\") pod \"certified-operators-mgpvk\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:55:57 crc kubenswrapper[4898]: I0120 04:55:57.160417 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:55:57 crc kubenswrapper[4898]: I0120 04:55:57.730779 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgpvk"] Jan 20 04:55:58 crc kubenswrapper[4898]: I0120 04:55:58.616242 4898 generic.go:334] "Generic (PLEG): container finished" podID="be744aea-0dbb-459e-b826-cdc290d6ab24" containerID="5ada5253a399f1872ece930ecfe27c02c8806e756f3e465437b2ffde50c1e8e3" exitCode=0 Jan 20 04:55:58 crc kubenswrapper[4898]: I0120 04:55:58.616344 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgpvk" event={"ID":"be744aea-0dbb-459e-b826-cdc290d6ab24","Type":"ContainerDied","Data":"5ada5253a399f1872ece930ecfe27c02c8806e756f3e465437b2ffde50c1e8e3"} Jan 20 04:55:58 crc kubenswrapper[4898]: I0120 04:55:58.616659 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgpvk" event={"ID":"be744aea-0dbb-459e-b826-cdc290d6ab24","Type":"ContainerStarted","Data":"f3405935d731aef537b31c4b2701c435dbdcc44a1cdc1b276397159b12b0d973"} Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.440578 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d5skw"] Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.444086 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.462510 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d5skw"] Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.616037 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-catalog-content\") pod \"redhat-operators-d5skw\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.616137 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-utilities\") pod \"redhat-operators-d5skw\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.616220 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6kdw\" (UniqueName: \"kubernetes.io/projected/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-kube-api-access-j6kdw\") pod \"redhat-operators-d5skw\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.625545 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgpvk" event={"ID":"be744aea-0dbb-459e-b826-cdc290d6ab24","Type":"ContainerStarted","Data":"aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d"} Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.718000 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6kdw\" (UniqueName: \"kubernetes.io/projected/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-kube-api-access-j6kdw\") pod \"redhat-operators-d5skw\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.718352 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-catalog-content\") pod \"redhat-operators-d5skw\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.718446 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-utilities\") pod \"redhat-operators-d5skw\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.718953 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-utilities\") pod \"redhat-operators-d5skw\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.718996 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-catalog-content\") pod \"redhat-operators-d5skw\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.742542 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6kdw\" (UniqueName: \"kubernetes.io/projected/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-kube-api-access-j6kdw\") pod \"redhat-operators-d5skw\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:55:59 crc kubenswrapper[4898]: I0120 04:55:59.778067 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:56:00 crc kubenswrapper[4898]: I0120 04:56:00.207358 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d5skw"] Jan 20 04:56:00 crc kubenswrapper[4898]: W0120 04:56:00.215620 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc60b0384_3cd5_48c9_9bec_8bbecc4eb62e.slice/crio-5ec938a8e1b94242fb19160d8ab3507801e47e055a1267ec9b0ce3ad22c91370 WatchSource:0}: Error finding container 5ec938a8e1b94242fb19160d8ab3507801e47e055a1267ec9b0ce3ad22c91370: Status 404 returned error can't find the container with id 5ec938a8e1b94242fb19160d8ab3507801e47e055a1267ec9b0ce3ad22c91370 Jan 20 04:56:00 crc kubenswrapper[4898]: I0120 04:56:00.635090 4898 generic.go:334] "Generic (PLEG): container finished" podID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerID="fbcd1f5d555d9295b80041e4d9c37e51b0fa75fb051178e142480a0f090e45c0" exitCode=0 Jan 20 04:56:00 crc kubenswrapper[4898]: I0120 04:56:00.635148 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5skw" event={"ID":"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e","Type":"ContainerDied","Data":"fbcd1f5d555d9295b80041e4d9c37e51b0fa75fb051178e142480a0f090e45c0"} Jan 20 04:56:00 crc kubenswrapper[4898]: I0120 04:56:00.635172 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5skw" event={"ID":"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e","Type":"ContainerStarted","Data":"5ec938a8e1b94242fb19160d8ab3507801e47e055a1267ec9b0ce3ad22c91370"} Jan 20 04:56:00 crc kubenswrapper[4898]: I0120 04:56:00.638026 4898 generic.go:334] "Generic (PLEG): container finished" podID="be744aea-0dbb-459e-b826-cdc290d6ab24" containerID="aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d" exitCode=0 Jan 20 04:56:00 crc kubenswrapper[4898]: I0120 04:56:00.638053 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgpvk" event={"ID":"be744aea-0dbb-459e-b826-cdc290d6ab24","Type":"ContainerDied","Data":"aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d"} Jan 20 04:56:01 crc kubenswrapper[4898]: I0120 04:56:01.647612 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgpvk" event={"ID":"be744aea-0dbb-459e-b826-cdc290d6ab24","Type":"ContainerStarted","Data":"77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19"} Jan 20 04:56:01 crc kubenswrapper[4898]: I0120 04:56:01.649517 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5skw" 
event={"ID":"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e","Type":"ContainerStarted","Data":"5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590"} Jan 20 04:56:01 crc kubenswrapper[4898]: I0120 04:56:01.672287 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mgpvk" podStartSLOduration=3.156200022 podStartE2EDuration="5.672271218s" podCreationTimestamp="2026-01-20 04:55:56 +0000 UTC" firstStartedPulling="2026-01-20 04:55:58.61942044 +0000 UTC m=+4005.219208339" lastFinishedPulling="2026-01-20 04:56:01.135491676 +0000 UTC m=+4007.735279535" observedRunningTime="2026-01-20 04:56:01.664118566 +0000 UTC m=+4008.263906425" watchObservedRunningTime="2026-01-20 04:56:01.672271218 +0000 UTC m=+4008.272059077" Jan 20 04:56:01 crc kubenswrapper[4898]: I0120 04:56:01.721076 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:56:01 crc kubenswrapper[4898]: E0120 04:56:01.721451 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:56:03 crc kubenswrapper[4898]: I0120 04:56:03.669076 4898 generic.go:334] "Generic (PLEG): container finished" podID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerID="5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590" exitCode=0 Jan 20 04:56:03 crc kubenswrapper[4898]: I0120 04:56:03.669121 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5skw" event={"ID":"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e","Type":"ContainerDied","Data":"5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590"} Jan 20 04:56:05 crc kubenswrapper[4898]: I0120 04:56:05.694213 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5skw" event={"ID":"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e","Type":"ContainerStarted","Data":"11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636"} Jan 20 04:56:05 crc kubenswrapper[4898]: I0120 04:56:05.724579 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d5skw" podStartSLOduration=2.859979887 podStartE2EDuration="6.724560751s" podCreationTimestamp="2026-01-20 04:55:59 +0000 UTC" firstStartedPulling="2026-01-20 04:56:00.637031748 +0000 UTC m=+4007.236819617" lastFinishedPulling="2026-01-20 04:56:04.501612622 +0000 UTC m=+4011.101400481" observedRunningTime="2026-01-20 04:56:05.717561744 +0000 UTC m=+4012.317349623" watchObservedRunningTime="2026-01-20 04:56:05.724560751 +0000 UTC m=+4012.324348610" Jan 20 04:56:07 crc kubenswrapper[4898]: I0120 04:56:07.161393 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:56:07 crc kubenswrapper[4898]: I0120 04:56:07.162688 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:56:07 crc kubenswrapper[4898]: I0120 04:56:07.262274 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mgpvk" 
Jan 20 04:56:07 crc kubenswrapper[4898]: I0120 04:56:07.759484 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:56:08 crc kubenswrapper[4898]: I0120 04:56:08.432555 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mgpvk"] Jan 20 04:56:09 crc kubenswrapper[4898]: I0120 04:56:09.726980 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mgpvk" podUID="be744aea-0dbb-459e-b826-cdc290d6ab24" containerName="registry-server" containerID="cri-o://77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19" gracePeriod=2 Jan 20 04:56:09 crc kubenswrapper[4898]: I0120 04:56:09.778621 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:56:09 crc kubenswrapper[4898]: I0120 04:56:09.778673 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.171728 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.345369 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2fjx\" (UniqueName: \"kubernetes.io/projected/be744aea-0dbb-459e-b826-cdc290d6ab24-kube-api-access-w2fjx\") pod \"be744aea-0dbb-459e-b826-cdc290d6ab24\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.345514 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-catalog-content\") pod \"be744aea-0dbb-459e-b826-cdc290d6ab24\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.345621 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-utilities\") pod \"be744aea-0dbb-459e-b826-cdc290d6ab24\" (UID: \"be744aea-0dbb-459e-b826-cdc290d6ab24\") " Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.346686 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-utilities" (OuterVolumeSpecName: "utilities") pod "be744aea-0dbb-459e-b826-cdc290d6ab24" (UID: "be744aea-0dbb-459e-b826-cdc290d6ab24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.352509 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be744aea-0dbb-459e-b826-cdc290d6ab24-kube-api-access-w2fjx" (OuterVolumeSpecName: "kube-api-access-w2fjx") pod "be744aea-0dbb-459e-b826-cdc290d6ab24" (UID: "be744aea-0dbb-459e-b826-cdc290d6ab24"). InnerVolumeSpecName "kube-api-access-w2fjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.389101 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be744aea-0dbb-459e-b826-cdc290d6ab24" (UID: "be744aea-0dbb-459e-b826-cdc290d6ab24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.447323 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.447352 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be744aea-0dbb-459e-b826-cdc290d6ab24-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.447362 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2fjx\" (UniqueName: \"kubernetes.io/projected/be744aea-0dbb-459e-b826-cdc290d6ab24-kube-api-access-w2fjx\") on node \"crc\" DevicePath \"\"" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.736900 4898 generic.go:334] "Generic (PLEG): container finished" podID="be744aea-0dbb-459e-b826-cdc290d6ab24" containerID="77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19" exitCode=0 Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.737315 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgpvk" event={"ID":"be744aea-0dbb-459e-b826-cdc290d6ab24","Type":"ContainerDied","Data":"77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19"} Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.737348 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgpvk" event={"ID":"be744aea-0dbb-459e-b826-cdc290d6ab24","Type":"ContainerDied","Data":"f3405935d731aef537b31c4b2701c435dbdcc44a1cdc1b276397159b12b0d973"} Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.737369 4898 scope.go:117] "RemoveContainer" containerID="77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.737554 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgpvk" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.768777 4898 scope.go:117] "RemoveContainer" containerID="aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.780180 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mgpvk"] Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.791274 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mgpvk"] Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.796846 4898 scope.go:117] "RemoveContainer" containerID="5ada5253a399f1872ece930ecfe27c02c8806e756f3e465437b2ffde50c1e8e3" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.840676 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d5skw" podUID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerName="registry-server" probeResult="failure" output=< Jan 20 04:56:10 crc kubenswrapper[4898]: timeout: failed to connect service ":50051" within 1s Jan 20 04:56:10 crc kubenswrapper[4898]: > Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.849280 4898 scope.go:117] "RemoveContainer" containerID="77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19" Jan 20 04:56:10 crc kubenswrapper[4898]: E0120 04:56:10.849798 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19\": container with ID starting with 77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19 not found: ID does not exist" containerID="77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.849826 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19"} err="failed to get container status \"77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19\": rpc error: code = NotFound desc = could not find container \"77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19\": container with ID starting with 77589583f0b89eac1d04cb28dc13be8971ba41026fffc499b2c97711136cff19 not found: ID does not exist" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.849845 4898 scope.go:117] "RemoveContainer" containerID="aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d" Jan 20 04:56:10 crc kubenswrapper[4898]: E0120 04:56:10.850265 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d\": container with ID starting with aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d not found: ID does not exist" containerID="aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.850285 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d"} err="failed to get container status \"aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d\": rpc error: code = NotFound desc = could not find container \"aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d\": 
container with ID starting with aa6f7d06076f3ab15f4fa12297ffacee9266cefaae0f3a159862533b2363582d not found: ID does not exist" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.850296 4898 scope.go:117] "RemoveContainer" containerID="5ada5253a399f1872ece930ecfe27c02c8806e756f3e465437b2ffde50c1e8e3" Jan 20 04:56:10 crc kubenswrapper[4898]: E0120 04:56:10.850726 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ada5253a399f1872ece930ecfe27c02c8806e756f3e465437b2ffde50c1e8e3\": container with ID starting with 5ada5253a399f1872ece930ecfe27c02c8806e756f3e465437b2ffde50c1e8e3 not found: ID does not exist" containerID="5ada5253a399f1872ece930ecfe27c02c8806e756f3e465437b2ffde50c1e8e3" Jan 20 04:56:10 crc kubenswrapper[4898]: I0120 04:56:10.850748 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ada5253a399f1872ece930ecfe27c02c8806e756f3e465437b2ffde50c1e8e3"} err="failed to get container status \"5ada5253a399f1872ece930ecfe27c02c8806e756f3e465437b2ffde50c1e8e3\": rpc error: code = NotFound desc = could not find container \"5ada5253a399f1872ece930ecfe27c02c8806e756f3e465437b2ffde50c1e8e3\": container with ID starting with 5ada5253a399f1872ece930ecfe27c02c8806e756f3e465437b2ffde50c1e8e3 not found: ID does not exist" Jan 20 04:56:11 crc kubenswrapper[4898]: I0120 04:56:11.732085 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be744aea-0dbb-459e-b826-cdc290d6ab24" path="/var/lib/kubelet/pods/be744aea-0dbb-459e-b826-cdc290d6ab24/volumes" Jan 20 04:56:12 crc kubenswrapper[4898]: I0120 04:56:12.722167 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:56:12 crc kubenswrapper[4898]: E0120 04:56:12.723489 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:56:19 crc kubenswrapper[4898]: I0120 04:56:19.847607 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:56:19 crc kubenswrapper[4898]: I0120 04:56:19.905648 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:56:21 crc kubenswrapper[4898]: I0120 04:56:21.032467 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d5skw"] Jan 20 04:56:21 crc kubenswrapper[4898]: I0120 04:56:21.856702 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d5skw" podUID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerName="registry-server" containerID="cri-o://11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636" gracePeriod=2 Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.470819 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.630705 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-utilities\") pod \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.631140 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-catalog-content\") pod \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.631192 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6kdw\" (UniqueName: \"kubernetes.io/projected/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-kube-api-access-j6kdw\") pod \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\" (UID: \"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e\") " Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.634260 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-utilities" (OuterVolumeSpecName: "utilities") pod "c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" (UID: "c60b0384-3cd5-48c9-9bec-8bbecc4eb62e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.639589 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-kube-api-access-j6kdw" (OuterVolumeSpecName: "kube-api-access-j6kdw") pod "c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" (UID: "c60b0384-3cd5-48c9-9bec-8bbecc4eb62e"). InnerVolumeSpecName "kube-api-access-j6kdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.733898 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6kdw\" (UniqueName: \"kubernetes.io/projected/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-kube-api-access-j6kdw\") on node \"crc\" DevicePath \"\"" Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.733929 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.781852 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" (UID: "c60b0384-3cd5-48c9-9bec-8bbecc4eb62e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.840021 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.865301 4898 generic.go:334] "Generic (PLEG): container finished" podID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerID="11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636" exitCode=0 Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.865404 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d5skw" Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.865421 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5skw" event={"ID":"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e","Type":"ContainerDied","Data":"11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636"} Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.865805 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5skw" event={"ID":"c60b0384-3cd5-48c9-9bec-8bbecc4eb62e","Type":"ContainerDied","Data":"5ec938a8e1b94242fb19160d8ab3507801e47e055a1267ec9b0ce3ad22c91370"} Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.865837 4898 scope.go:117] "RemoveContainer" containerID="11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636" Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.907582 4898 scope.go:117] "RemoveContainer" containerID="5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590" Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.946486 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d5skw"] Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.960588 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d5skw"] Jan 20 04:56:22 crc kubenswrapper[4898]: I0120 04:56:22.961147 4898 scope.go:117] "RemoveContainer" containerID="fbcd1f5d555d9295b80041e4d9c37e51b0fa75fb051178e142480a0f090e45c0" Jan 20 04:56:23 crc kubenswrapper[4898]: I0120 04:56:23.002511 4898 scope.go:117] "RemoveContainer" containerID="11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636" Jan 20 04:56:23 crc kubenswrapper[4898]: E0120 04:56:23.003137 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636\": container with ID starting with 11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636 not found: ID does not exist" containerID="11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636" Jan 20 04:56:23 crc kubenswrapper[4898]: I0120 04:56:23.003212 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636"} err="failed to get container status \"11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636\": rpc error: code = NotFound desc = could not find container \"11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636\": container with ID starting with 11487205c15ba880b801835858ff9267aa6669fbc8387717ed44e57507783636 not found: ID does not exist" Jan 20 04:56:23 crc 
kubenswrapper[4898]: I0120 04:56:23.003241 4898 scope.go:117] "RemoveContainer" containerID="5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590" Jan 20 04:56:23 crc kubenswrapper[4898]: E0120 04:56:23.003676 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590\": container with ID starting with 5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590 not found: ID does not exist" containerID="5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590" Jan 20 04:56:23 crc kubenswrapper[4898]: I0120 04:56:23.003710 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590"} err="failed to get container status \"5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590\": rpc error: code = NotFound desc = could not find container \"5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590\": container with ID starting with 5ab89b4fd0918439811f7b1168be603797699c3c147f58cc13e58d79606c5590 not found: ID does not exist" Jan 20 04:56:23 crc kubenswrapper[4898]: I0120 04:56:23.003731 4898 scope.go:117] "RemoveContainer" containerID="fbcd1f5d555d9295b80041e4d9c37e51b0fa75fb051178e142480a0f090e45c0" Jan 20 04:56:23 crc kubenswrapper[4898]: E0120 04:56:23.004047 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbcd1f5d555d9295b80041e4d9c37e51b0fa75fb051178e142480a0f090e45c0\": container with ID starting with fbcd1f5d555d9295b80041e4d9c37e51b0fa75fb051178e142480a0f090e45c0 not found: ID does not exist" containerID="fbcd1f5d555d9295b80041e4d9c37e51b0fa75fb051178e142480a0f090e45c0" Jan 20 04:56:23 crc kubenswrapper[4898]: I0120 04:56:23.004071 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbcd1f5d555d9295b80041e4d9c37e51b0fa75fb051178e142480a0f090e45c0"} err="failed to get container status \"fbcd1f5d555d9295b80041e4d9c37e51b0fa75fb051178e142480a0f090e45c0\": rpc error: code = NotFound desc = could not find container \"fbcd1f5d555d9295b80041e4d9c37e51b0fa75fb051178e142480a0f090e45c0\": container with ID starting with fbcd1f5d555d9295b80041e4d9c37e51b0fa75fb051178e142480a0f090e45c0 not found: ID does not exist" Jan 20 04:56:23 crc kubenswrapper[4898]: I0120 04:56:23.745499 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" path="/var/lib/kubelet/pods/c60b0384-3cd5-48c9-9bec-8bbecc4eb62e/volumes" Jan 20 04:56:27 crc kubenswrapper[4898]: I0120 04:56:27.721634 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:56:27 crc kubenswrapper[4898]: E0120 04:56:27.722671 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:56:39 crc kubenswrapper[4898]: I0120 04:56:39.722220 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" 
Jan 20 04:56:39 crc kubenswrapper[4898]: E0120 04:56:39.723603 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:56:52 crc kubenswrapper[4898]: I0120 04:56:52.721395 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:56:52 crc kubenswrapper[4898]: E0120 04:56:52.722760 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:57:04 crc kubenswrapper[4898]: I0120 04:57:04.721967 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:57:04 crc kubenswrapper[4898]: E0120 04:57:04.722586 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:57:19 crc kubenswrapper[4898]: I0120 04:57:19.721961 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:57:19 crc kubenswrapper[4898]: E0120 04:57:19.723137 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:57:34 crc kubenswrapper[4898]: I0120 04:57:34.722787 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:57:34 crc kubenswrapper[4898]: E0120 04:57:34.724200 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:57:49 crc kubenswrapper[4898]: I0120 04:57:49.723846 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:57:49 crc kubenswrapper[4898]: E0120 04:57:49.724562 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:58:03 crc kubenswrapper[4898]: I0120 04:58:03.726366 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:58:03 crc kubenswrapper[4898]: E0120 04:58:03.727103 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 04:58:16 crc kubenswrapper[4898]: I0120 04:58:16.724049 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 04:58:17 crc kubenswrapper[4898]: I0120 04:58:17.035395 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"0a00bbb5e895fc8536182d144344babc2920f75ab462b50d9646e1de5847c410"} Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.216421 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf"] Jan 20 05:00:00 crc kubenswrapper[4898]: E0120 05:00:00.217348 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerName="registry-server" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.217365 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerName="registry-server" Jan 20 05:00:00 crc kubenswrapper[4898]: E0120 05:00:00.217388 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be744aea-0dbb-459e-b826-cdc290d6ab24" containerName="extract-utilities" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.217398 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="be744aea-0dbb-459e-b826-cdc290d6ab24" containerName="extract-utilities" Jan 20 05:00:00 crc kubenswrapper[4898]: E0120 05:00:00.217415 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerName="extract-utilities" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.217425 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerName="extract-utilities" Jan 20 05:00:00 crc kubenswrapper[4898]: E0120 05:00:00.217454 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerName="extract-content" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.217461 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerName="extract-content" Jan 20 05:00:00 crc kubenswrapper[4898]: E0120 05:00:00.217488 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be744aea-0dbb-459e-b826-cdc290d6ab24" containerName="extract-content" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.217496 4898 
state_mem.go:107] "Deleted CPUSet assignment" podUID="be744aea-0dbb-459e-b826-cdc290d6ab24" containerName="extract-content" Jan 20 05:00:00 crc kubenswrapper[4898]: E0120 05:00:00.217515 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be744aea-0dbb-459e-b826-cdc290d6ab24" containerName="registry-server" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.217523 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="be744aea-0dbb-459e-b826-cdc290d6ab24" containerName="registry-server" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.217751 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="be744aea-0dbb-459e-b826-cdc290d6ab24" containerName="registry-server" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.217773 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="c60b0384-3cd5-48c9-9bec-8bbecc4eb62e" containerName="registry-server" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.218567 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.230968 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf"] Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.272409 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.272688 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.396425 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7449ed39-7970-468d-a9ab-84a887243e1c-config-volume\") pod \"collect-profiles-29481420-bfvvf\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.396601 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7449ed39-7970-468d-a9ab-84a887243e1c-secret-volume\") pod \"collect-profiles-29481420-bfvvf\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.396740 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjz2s\" (UniqueName: \"kubernetes.io/projected/7449ed39-7970-468d-a9ab-84a887243e1c-kube-api-access-rjz2s\") pod \"collect-profiles-29481420-bfvvf\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.498242 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjz2s\" (UniqueName: \"kubernetes.io/projected/7449ed39-7970-468d-a9ab-84a887243e1c-kube-api-access-rjz2s\") pod \"collect-profiles-29481420-bfvvf\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 
05:00:00.498293 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7449ed39-7970-468d-a9ab-84a887243e1c-config-volume\") pod \"collect-profiles-29481420-bfvvf\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.498361 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7449ed39-7970-468d-a9ab-84a887243e1c-secret-volume\") pod \"collect-profiles-29481420-bfvvf\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.499597 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7449ed39-7970-468d-a9ab-84a887243e1c-config-volume\") pod \"collect-profiles-29481420-bfvvf\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.504706 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7449ed39-7970-468d-a9ab-84a887243e1c-secret-volume\") pod \"collect-profiles-29481420-bfvvf\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.519035 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjz2s\" (UniqueName: \"kubernetes.io/projected/7449ed39-7970-468d-a9ab-84a887243e1c-kube-api-access-rjz2s\") pod \"collect-profiles-29481420-bfvvf\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.616186 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:00 crc kubenswrapper[4898]: I0120 05:00:00.834559 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf"] Jan 20 05:00:01 crc kubenswrapper[4898]: I0120 05:00:01.108407 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" event={"ID":"7449ed39-7970-468d-a9ab-84a887243e1c","Type":"ContainerStarted","Data":"015e84b2e2f412cf54a2265fb784fc8e79c70e291b6c5aefd06ac6ff1bed6178"} Jan 20 05:00:01 crc kubenswrapper[4898]: I0120 05:00:01.108928 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" event={"ID":"7449ed39-7970-468d-a9ab-84a887243e1c","Type":"ContainerStarted","Data":"7b2f5aa419902d5cf515560dff4cdee19bed1679eb19cb62bbfe4b3ebce731c4"} Jan 20 05:00:01 crc kubenswrapper[4898]: I0120 05:00:01.130074 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" podStartSLOduration=1.130050483 podStartE2EDuration="1.130050483s" podCreationTimestamp="2026-01-20 05:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 05:00:01.120606421 +0000 UTC m=+4247.720394300" watchObservedRunningTime="2026-01-20 05:00:01.130050483 +0000 UTC m=+4247.729838352" Jan 20 05:00:02 crc kubenswrapper[4898]: I0120 05:00:02.122604 4898 generic.go:334] "Generic (PLEG): container finished" podID="7449ed39-7970-468d-a9ab-84a887243e1c" containerID="015e84b2e2f412cf54a2265fb784fc8e79c70e291b6c5aefd06ac6ff1bed6178" exitCode=0 Jan 20 05:00:02 crc kubenswrapper[4898]: I0120 05:00:02.122689 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" event={"ID":"7449ed39-7970-468d-a9ab-84a887243e1c","Type":"ContainerDied","Data":"015e84b2e2f412cf54a2265fb784fc8e79c70e291b6c5aefd06ac6ff1bed6178"} Jan 20 05:00:03 crc kubenswrapper[4898]: I0120 05:00:03.511163 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:03 crc kubenswrapper[4898]: I0120 05:00:03.668251 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7449ed39-7970-468d-a9ab-84a887243e1c-config-volume\") pod \"7449ed39-7970-468d-a9ab-84a887243e1c\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " Jan 20 05:00:03 crc kubenswrapper[4898]: I0120 05:00:03.668542 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjz2s\" (UniqueName: \"kubernetes.io/projected/7449ed39-7970-468d-a9ab-84a887243e1c-kube-api-access-rjz2s\") pod \"7449ed39-7970-468d-a9ab-84a887243e1c\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " Jan 20 05:00:03 crc kubenswrapper[4898]: I0120 05:00:03.668714 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7449ed39-7970-468d-a9ab-84a887243e1c-secret-volume\") pod \"7449ed39-7970-468d-a9ab-84a887243e1c\" (UID: \"7449ed39-7970-468d-a9ab-84a887243e1c\") " Jan 20 05:00:03 crc kubenswrapper[4898]: I0120 05:00:03.669687 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7449ed39-7970-468d-a9ab-84a887243e1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "7449ed39-7970-468d-a9ab-84a887243e1c" (UID: "7449ed39-7970-468d-a9ab-84a887243e1c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 05:00:03 crc kubenswrapper[4898]: I0120 05:00:03.676171 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7449ed39-7970-468d-a9ab-84a887243e1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7449ed39-7970-468d-a9ab-84a887243e1c" (UID: "7449ed39-7970-468d-a9ab-84a887243e1c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 05:00:03 crc kubenswrapper[4898]: I0120 05:00:03.678625 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7449ed39-7970-468d-a9ab-84a887243e1c-kube-api-access-rjz2s" (OuterVolumeSpecName: "kube-api-access-rjz2s") pod "7449ed39-7970-468d-a9ab-84a887243e1c" (UID: "7449ed39-7970-468d-a9ab-84a887243e1c"). InnerVolumeSpecName "kube-api-access-rjz2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 05:00:03 crc kubenswrapper[4898]: I0120 05:00:03.771213 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjz2s\" (UniqueName: \"kubernetes.io/projected/7449ed39-7970-468d-a9ab-84a887243e1c-kube-api-access-rjz2s\") on node \"crc\" DevicePath \"\"" Jan 20 05:00:03 crc kubenswrapper[4898]: I0120 05:00:03.771244 4898 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7449ed39-7970-468d-a9ab-84a887243e1c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 05:00:03 crc kubenswrapper[4898]: I0120 05:00:03.771255 4898 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7449ed39-7970-468d-a9ab-84a887243e1c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 05:00:04 crc kubenswrapper[4898]: I0120 05:00:04.153878 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" event={"ID":"7449ed39-7970-468d-a9ab-84a887243e1c","Type":"ContainerDied","Data":"7b2f5aa419902d5cf515560dff4cdee19bed1679eb19cb62bbfe4b3ebce731c4"} Jan 20 05:00:04 crc kubenswrapper[4898]: I0120 05:00:04.153929 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b2f5aa419902d5cf515560dff4cdee19bed1679eb19cb62bbfe4b3ebce731c4" Jan 20 05:00:04 crc kubenswrapper[4898]: I0120 05:00:04.153991 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481420-bfvvf" Jan 20 05:00:04 crc kubenswrapper[4898]: I0120 05:00:04.610807 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl"] Jan 20 05:00:04 crc kubenswrapper[4898]: I0120 05:00:04.617759 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481375-qk2jl"] Jan 20 05:00:05 crc kubenswrapper[4898]: I0120 05:00:05.742393 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f241279c-d727-47c7-9cb8-3adf038b09d3" path="/var/lib/kubelet/pods/f241279c-d727-47c7-9cb8-3adf038b09d3/volumes" Jan 20 05:00:19 crc kubenswrapper[4898]: I0120 05:00:19.406685 4898 scope.go:117] "RemoveContainer" containerID="b37d51c7f71246b0ce8aeb7ae1395d0b2045f11d2428e300f4e7d77fc6cf7c8a" Jan 20 05:00:39 crc kubenswrapper[4898]: I0120 05:00:39.976003 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 05:00:39 crc kubenswrapper[4898]: I0120 05:00:39.976649 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.180204 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29481421-hm5dz"] Jan 20 05:01:00 crc kubenswrapper[4898]: E0120 05:01:00.181113 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7449ed39-7970-468d-a9ab-84a887243e1c" 
containerName="collect-profiles" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.181127 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="7449ed39-7970-468d-a9ab-84a887243e1c" containerName="collect-profiles" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.181379 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="7449ed39-7970-468d-a9ab-84a887243e1c" containerName="collect-profiles" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.182080 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.211597 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29481421-hm5dz"] Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.277419 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-combined-ca-bundle\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.277517 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8shk\" (UniqueName: \"kubernetes.io/projected/123681c9-37b6-4096-9a29-9547b9d33f01-kube-api-access-k8shk\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.277556 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-config-data\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.277660 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-fernet-keys\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.379560 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-fernet-keys\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.379709 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-combined-ca-bundle\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.379755 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8shk\" (UniqueName: \"kubernetes.io/projected/123681c9-37b6-4096-9a29-9547b9d33f01-kube-api-access-k8shk\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " 
pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.379780 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-config-data\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.385661 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-config-data\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.386074 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-combined-ca-bundle\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.390017 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-fernet-keys\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.403538 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8shk\" (UniqueName: \"kubernetes.io/projected/123681c9-37b6-4096-9a29-9547b9d33f01-kube-api-access-k8shk\") pod \"keystone-cron-29481421-hm5dz\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.510507 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:00 crc kubenswrapper[4898]: I0120 05:01:00.965269 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29481421-hm5dz"] Jan 20 05:01:01 crc kubenswrapper[4898]: I0120 05:01:01.667276 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29481421-hm5dz" event={"ID":"123681c9-37b6-4096-9a29-9547b9d33f01","Type":"ContainerStarted","Data":"8a5dc5e48231158396e5770b904b1c8aaf514ab8338c7b1f71b1ce38a2a52382"} Jan 20 05:01:01 crc kubenswrapper[4898]: I0120 05:01:01.667601 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29481421-hm5dz" event={"ID":"123681c9-37b6-4096-9a29-9547b9d33f01","Type":"ContainerStarted","Data":"2efadf4c95d606fbf1413b54aa5bdd5cc9aa75fd52dac4d9e57a47ea83480670"} Jan 20 05:01:03 crc kubenswrapper[4898]: I0120 05:01:03.684565 4898 generic.go:334] "Generic (PLEG): container finished" podID="123681c9-37b6-4096-9a29-9547b9d33f01" containerID="8a5dc5e48231158396e5770b904b1c8aaf514ab8338c7b1f71b1ce38a2a52382" exitCode=0 Jan 20 05:01:03 crc kubenswrapper[4898]: I0120 05:01:03.684668 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29481421-hm5dz" event={"ID":"123681c9-37b6-4096-9a29-9547b9d33f01","Type":"ContainerDied","Data":"8a5dc5e48231158396e5770b904b1c8aaf514ab8338c7b1f71b1ce38a2a52382"} Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.073319 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.215049 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-fernet-keys\") pod \"123681c9-37b6-4096-9a29-9547b9d33f01\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.215163 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8shk\" (UniqueName: \"kubernetes.io/projected/123681c9-37b6-4096-9a29-9547b9d33f01-kube-api-access-k8shk\") pod \"123681c9-37b6-4096-9a29-9547b9d33f01\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.215401 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-combined-ca-bundle\") pod \"123681c9-37b6-4096-9a29-9547b9d33f01\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.215527 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-config-data\") pod \"123681c9-37b6-4096-9a29-9547b9d33f01\" (UID: \"123681c9-37b6-4096-9a29-9547b9d33f01\") " Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.221008 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123681c9-37b6-4096-9a29-9547b9d33f01-kube-api-access-k8shk" (OuterVolumeSpecName: "kube-api-access-k8shk") pod "123681c9-37b6-4096-9a29-9547b9d33f01" (UID: "123681c9-37b6-4096-9a29-9547b9d33f01"). InnerVolumeSpecName "kube-api-access-k8shk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.221623 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "123681c9-37b6-4096-9a29-9547b9d33f01" (UID: "123681c9-37b6-4096-9a29-9547b9d33f01"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.262095 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "123681c9-37b6-4096-9a29-9547b9d33f01" (UID: "123681c9-37b6-4096-9a29-9547b9d33f01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.276484 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-config-data" (OuterVolumeSpecName: "config-data") pod "123681c9-37b6-4096-9a29-9547b9d33f01" (UID: "123681c9-37b6-4096-9a29-9547b9d33f01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.317385 4898 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.317420 4898 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.317444 4898 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/123681c9-37b6-4096-9a29-9547b9d33f01-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.317457 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8shk\" (UniqueName: \"kubernetes.io/projected/123681c9-37b6-4096-9a29-9547b9d33f01-kube-api-access-k8shk\") on node \"crc\" DevicePath \"\"" Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.703188 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29481421-hm5dz" event={"ID":"123681c9-37b6-4096-9a29-9547b9d33f01","Type":"ContainerDied","Data":"2efadf4c95d606fbf1413b54aa5bdd5cc9aa75fd52dac4d9e57a47ea83480670"} Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.703227 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2efadf4c95d606fbf1413b54aa5bdd5cc9aa75fd52dac4d9e57a47ea83480670" Jan 20 05:01:05 crc kubenswrapper[4898]: I0120 05:01:05.703488 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29481421-hm5dz" Jan 20 05:01:09 crc kubenswrapper[4898]: I0120 05:01:09.976623 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 05:01:09 crc kubenswrapper[4898]: I0120 05:01:09.977184 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 05:01:39 crc kubenswrapper[4898]: I0120 05:01:39.976631 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 05:01:39 crc kubenswrapper[4898]: I0120 05:01:39.977374 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 05:01:39 crc kubenswrapper[4898]: I0120 05:01:39.977445 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 05:01:39 crc kubenswrapper[4898]: I0120 05:01:39.978551 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a00bbb5e895fc8536182d144344babc2920f75ab462b50d9646e1de5847c410"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 05:01:39 crc kubenswrapper[4898]: I0120 05:01:39.978656 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://0a00bbb5e895fc8536182d144344babc2920f75ab462b50d9646e1de5847c410" gracePeriod=600 Jan 20 05:01:41 crc kubenswrapper[4898]: I0120 05:01:41.060904 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="0a00bbb5e895fc8536182d144344babc2920f75ab462b50d9646e1de5847c410" exitCode=0 Jan 20 05:01:41 crc kubenswrapper[4898]: I0120 05:01:41.060937 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"0a00bbb5e895fc8536182d144344babc2920f75ab462b50d9646e1de5847c410"} Jan 20 05:01:41 crc kubenswrapper[4898]: I0120 05:01:41.061387 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c"} Jan 20 05:01:41 crc 
kubenswrapper[4898]: I0120 05:01:41.061409 4898 scope.go:117] "RemoveContainer" containerID="32affda2fe3050ff0e3209b5761ea8bdf732f4820a0b35086ea5a7eaeb3b8024" Jan 20 05:01:43 crc kubenswrapper[4898]: I0120 05:01:43.736475 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9k7gc"] Jan 20 05:01:43 crc kubenswrapper[4898]: E0120 05:01:43.737315 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123681c9-37b6-4096-9a29-9547b9d33f01" containerName="keystone-cron" Jan 20 05:01:43 crc kubenswrapper[4898]: I0120 05:01:43.737333 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="123681c9-37b6-4096-9a29-9547b9d33f01" containerName="keystone-cron" Jan 20 05:01:43 crc kubenswrapper[4898]: I0120 05:01:43.737609 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="123681c9-37b6-4096-9a29-9547b9d33f01" containerName="keystone-cron" Jan 20 05:01:43 crc kubenswrapper[4898]: I0120 05:01:43.743297 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:43 crc kubenswrapper[4898]: I0120 05:01:43.756848 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k7gc"] Jan 20 05:01:43 crc kubenswrapper[4898]: I0120 05:01:43.913992 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-catalog-content\") pod \"community-operators-9k7gc\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:43 crc kubenswrapper[4898]: I0120 05:01:43.914215 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmpmh\" (UniqueName: \"kubernetes.io/projected/a8301892-5514-4408-85b4-b16ae0f340e0-kube-api-access-vmpmh\") pod \"community-operators-9k7gc\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:43 crc kubenswrapper[4898]: I0120 05:01:43.914345 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-utilities\") pod \"community-operators-9k7gc\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:44 crc kubenswrapper[4898]: I0120 05:01:44.016425 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-catalog-content\") pod \"community-operators-9k7gc\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:44 crc kubenswrapper[4898]: I0120 05:01:44.016648 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmpmh\" (UniqueName: \"kubernetes.io/projected/a8301892-5514-4408-85b4-b16ae0f340e0-kube-api-access-vmpmh\") pod \"community-operators-9k7gc\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:44 crc kubenswrapper[4898]: I0120 05:01:44.016690 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-utilities\") pod \"community-operators-9k7gc\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:44 crc kubenswrapper[4898]: I0120 05:01:44.016906 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-catalog-content\") pod \"community-operators-9k7gc\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:44 crc kubenswrapper[4898]: I0120 05:01:44.017171 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-utilities\") pod \"community-operators-9k7gc\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:44 crc kubenswrapper[4898]: I0120 05:01:44.403790 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmpmh\" (UniqueName: \"kubernetes.io/projected/a8301892-5514-4408-85b4-b16ae0f340e0-kube-api-access-vmpmh\") pod \"community-operators-9k7gc\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:44 crc kubenswrapper[4898]: I0120 05:01:44.673748 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:45 crc kubenswrapper[4898]: I0120 05:01:45.145739 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k7gc"] Jan 20 05:01:46 crc kubenswrapper[4898]: I0120 05:01:46.112046 4898 generic.go:334] "Generic (PLEG): container finished" podID="a8301892-5514-4408-85b4-b16ae0f340e0" containerID="ead2b8898861cde82bdf1048cd5f20fbb051ca03e594206633f2edce1373e6ad" exitCode=0 Jan 20 05:01:46 crc kubenswrapper[4898]: I0120 05:01:46.112178 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7gc" event={"ID":"a8301892-5514-4408-85b4-b16ae0f340e0","Type":"ContainerDied","Data":"ead2b8898861cde82bdf1048cd5f20fbb051ca03e594206633f2edce1373e6ad"} Jan 20 05:01:46 crc kubenswrapper[4898]: I0120 05:01:46.112409 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7gc" event={"ID":"a8301892-5514-4408-85b4-b16ae0f340e0","Type":"ContainerStarted","Data":"6846dcb5c77bf4e75f91b8613ea50a1844bd32458a0d10ad2aabd354ba7b1c6e"} Jan 20 05:01:46 crc kubenswrapper[4898]: I0120 05:01:46.115255 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 05:01:48 crc kubenswrapper[4898]: I0120 05:01:48.133465 4898 generic.go:334] "Generic (PLEG): container finished" podID="a8301892-5514-4408-85b4-b16ae0f340e0" containerID="17e97204aae5bc6638d5352c0afab41c4559246957d0a50d451b6957fb3b9601" exitCode=0 Jan 20 05:01:48 crc kubenswrapper[4898]: I0120 05:01:48.133540 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7gc" event={"ID":"a8301892-5514-4408-85b4-b16ae0f340e0","Type":"ContainerDied","Data":"17e97204aae5bc6638d5352c0afab41c4559246957d0a50d451b6957fb3b9601"} Jan 20 05:01:49 crc kubenswrapper[4898]: I0120 05:01:49.147471 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-9k7gc" event={"ID":"a8301892-5514-4408-85b4-b16ae0f340e0","Type":"ContainerStarted","Data":"31e532d8fd148325fa19c7e64acb94bc9400caac9c71fabfe0d9ea4f9ac99bf0"} Jan 20 05:01:49 crc kubenswrapper[4898]: I0120 05:01:49.176122 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9k7gc" podStartSLOduration=3.6876809059999998 podStartE2EDuration="6.17610394s" podCreationTimestamp="2026-01-20 05:01:43 +0000 UTC" firstStartedPulling="2026-01-20 05:01:46.11503108 +0000 UTC m=+4352.714818939" lastFinishedPulling="2026-01-20 05:01:48.603454114 +0000 UTC m=+4355.203241973" observedRunningTime="2026-01-20 05:01:49.167824895 +0000 UTC m=+4355.767612764" watchObservedRunningTime="2026-01-20 05:01:49.17610394 +0000 UTC m=+4355.775891799" Jan 20 05:01:54 crc kubenswrapper[4898]: I0120 05:01:54.674128 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:54 crc kubenswrapper[4898]: I0120 05:01:54.675008 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:54 crc kubenswrapper[4898]: I0120 05:01:54.739399 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:55 crc kubenswrapper[4898]: I0120 05:01:55.271998 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:55 crc kubenswrapper[4898]: I0120 05:01:55.335200 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k7gc"] Jan 20 05:01:57 crc kubenswrapper[4898]: I0120 05:01:57.222011 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9k7gc" podUID="a8301892-5514-4408-85b4-b16ae0f340e0" containerName="registry-server" containerID="cri-o://31e532d8fd148325fa19c7e64acb94bc9400caac9c71fabfe0d9ea4f9ac99bf0" gracePeriod=2 Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.241471 4898 generic.go:334] "Generic (PLEG): container finished" podID="a8301892-5514-4408-85b4-b16ae0f340e0" containerID="31e532d8fd148325fa19c7e64acb94bc9400caac9c71fabfe0d9ea4f9ac99bf0" exitCode=0 Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.241551 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7gc" event={"ID":"a8301892-5514-4408-85b4-b16ae0f340e0","Type":"ContainerDied","Data":"31e532d8fd148325fa19c7e64acb94bc9400caac9c71fabfe0d9ea4f9ac99bf0"} Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.242324 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7gc" event={"ID":"a8301892-5514-4408-85b4-b16ae0f340e0","Type":"ContainerDied","Data":"6846dcb5c77bf4e75f91b8613ea50a1844bd32458a0d10ad2aabd354ba7b1c6e"} Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.242349 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6846dcb5c77bf4e75f91b8613ea50a1844bd32458a0d10ad2aabd354ba7b1c6e" Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.311500 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.442244 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-catalog-content\") pod \"a8301892-5514-4408-85b4-b16ae0f340e0\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.442336 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-utilities\") pod \"a8301892-5514-4408-85b4-b16ae0f340e0\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.442494 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmpmh\" (UniqueName: \"kubernetes.io/projected/a8301892-5514-4408-85b4-b16ae0f340e0-kube-api-access-vmpmh\") pod \"a8301892-5514-4408-85b4-b16ae0f340e0\" (UID: \"a8301892-5514-4408-85b4-b16ae0f340e0\") " Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.443376 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-utilities" (OuterVolumeSpecName: "utilities") pod "a8301892-5514-4408-85b4-b16ae0f340e0" (UID: "a8301892-5514-4408-85b4-b16ae0f340e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.490600 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8301892-5514-4408-85b4-b16ae0f340e0" (UID: "a8301892-5514-4408-85b4-b16ae0f340e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.502574 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8301892-5514-4408-85b4-b16ae0f340e0-kube-api-access-vmpmh" (OuterVolumeSpecName: "kube-api-access-vmpmh") pod "a8301892-5514-4408-85b4-b16ae0f340e0" (UID: "a8301892-5514-4408-85b4-b16ae0f340e0"). InnerVolumeSpecName "kube-api-access-vmpmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.544940 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.545008 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmpmh\" (UniqueName: \"kubernetes.io/projected/a8301892-5514-4408-85b4-b16ae0f340e0-kube-api-access-vmpmh\") on node \"crc\" DevicePath \"\"" Jan 20 05:01:58 crc kubenswrapper[4898]: I0120 05:01:58.545028 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8301892-5514-4408-85b4-b16ae0f340e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 05:01:59 crc kubenswrapper[4898]: I0120 05:01:59.251056 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9k7gc" Jan 20 05:01:59 crc kubenswrapper[4898]: I0120 05:01:59.304716 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k7gc"] Jan 20 05:01:59 crc kubenswrapper[4898]: I0120 05:01:59.318853 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9k7gc"] Jan 20 05:01:59 crc kubenswrapper[4898]: I0120 05:01:59.735408 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8301892-5514-4408-85b4-b16ae0f340e0" path="/var/lib/kubelet/pods/a8301892-5514-4408-85b4-b16ae0f340e0/volumes" Jan 20 05:04:09 crc kubenswrapper[4898]: I0120 05:04:09.976138 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 05:04:09 crc kubenswrapper[4898]: I0120 05:04:09.976816 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 05:04:39 crc kubenswrapper[4898]: I0120 05:04:39.976020 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 05:04:39 crc kubenswrapper[4898]: I0120 05:04:39.977024 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 05:05:09 crc kubenswrapper[4898]: I0120 05:05:09.975903 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 05:05:09 crc kubenswrapper[4898]: I0120 05:05:09.976659 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 05:05:09 crc kubenswrapper[4898]: I0120 05:05:09.976737 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 05:05:09 crc kubenswrapper[4898]: I0120 05:05:09.977804 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Jan 20 05:05:09 crc kubenswrapper[4898]: I0120 05:05:09.977922 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" gracePeriod=600 Jan 20 05:05:10 crc kubenswrapper[4898]: E0120 05:05:10.119753 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:05:11 crc kubenswrapper[4898]: I0120 05:05:11.131425 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" exitCode=0 Jan 20 05:05:11 crc kubenswrapper[4898]: I0120 05:05:11.131842 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c"} Jan 20 05:05:11 crc kubenswrapper[4898]: I0120 05:05:11.131894 4898 scope.go:117] "RemoveContainer" containerID="0a00bbb5e895fc8536182d144344babc2920f75ab462b50d9646e1de5847c410" Jan 20 05:05:11 crc kubenswrapper[4898]: I0120 05:05:11.132921 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:05:11 crc kubenswrapper[4898]: E0120 05:05:11.133335 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:05:22 crc kubenswrapper[4898]: I0120 05:05:22.722231 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:05:22 crc kubenswrapper[4898]: E0120 05:05:22.723265 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.831088 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-765mp/must-gather-v7gff"] Jan 20 05:05:27 crc kubenswrapper[4898]: E0120 05:05:27.833506 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8301892-5514-4408-85b4-b16ae0f340e0" containerName="extract-content" Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.833525 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8301892-5514-4408-85b4-b16ae0f340e0" 
containerName="extract-content" Jan 20 05:05:27 crc kubenswrapper[4898]: E0120 05:05:27.833542 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8301892-5514-4408-85b4-b16ae0f340e0" containerName="registry-server" Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.833548 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8301892-5514-4408-85b4-b16ae0f340e0" containerName="registry-server" Jan 20 05:05:27 crc kubenswrapper[4898]: E0120 05:05:27.833564 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8301892-5514-4408-85b4-b16ae0f340e0" containerName="extract-utilities" Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.833570 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8301892-5514-4408-85b4-b16ae0f340e0" containerName="extract-utilities" Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.833732 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8301892-5514-4408-85b4-b16ae0f340e0" containerName="registry-server" Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.834690 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-765mp/must-gather-v7gff" Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.836274 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-765mp"/"default-dockercfg-59nwt" Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.843608 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-765mp/must-gather-v7gff"] Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.844411 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-765mp"/"openshift-service-ca.crt" Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.844849 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-765mp"/"kube-root-ca.crt" Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.976812 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0bf4f276-0eff-4c1c-83e3-005dc6004446-must-gather-output\") pod \"must-gather-v7gff\" (UID: \"0bf4f276-0eff-4c1c-83e3-005dc6004446\") " pod="openshift-must-gather-765mp/must-gather-v7gff" Jan 20 05:05:27 crc kubenswrapper[4898]: I0120 05:05:27.976860 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfv4q\" (UniqueName: \"kubernetes.io/projected/0bf4f276-0eff-4c1c-83e3-005dc6004446-kube-api-access-qfv4q\") pod \"must-gather-v7gff\" (UID: \"0bf4f276-0eff-4c1c-83e3-005dc6004446\") " pod="openshift-must-gather-765mp/must-gather-v7gff" Jan 20 05:05:28 crc kubenswrapper[4898]: I0120 05:05:28.078419 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0bf4f276-0eff-4c1c-83e3-005dc6004446-must-gather-output\") pod \"must-gather-v7gff\" (UID: \"0bf4f276-0eff-4c1c-83e3-005dc6004446\") " pod="openshift-must-gather-765mp/must-gather-v7gff" Jan 20 05:05:28 crc kubenswrapper[4898]: I0120 05:05:28.078726 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfv4q\" (UniqueName: \"kubernetes.io/projected/0bf4f276-0eff-4c1c-83e3-005dc6004446-kube-api-access-qfv4q\") pod \"must-gather-v7gff\" (UID: \"0bf4f276-0eff-4c1c-83e3-005dc6004446\") " 
pod="openshift-must-gather-765mp/must-gather-v7gff" Jan 20 05:05:28 crc kubenswrapper[4898]: I0120 05:05:28.078966 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0bf4f276-0eff-4c1c-83e3-005dc6004446-must-gather-output\") pod \"must-gather-v7gff\" (UID: \"0bf4f276-0eff-4c1c-83e3-005dc6004446\") " pod="openshift-must-gather-765mp/must-gather-v7gff" Jan 20 05:05:28 crc kubenswrapper[4898]: I0120 05:05:28.097526 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfv4q\" (UniqueName: \"kubernetes.io/projected/0bf4f276-0eff-4c1c-83e3-005dc6004446-kube-api-access-qfv4q\") pod \"must-gather-v7gff\" (UID: \"0bf4f276-0eff-4c1c-83e3-005dc6004446\") " pod="openshift-must-gather-765mp/must-gather-v7gff" Jan 20 05:05:28 crc kubenswrapper[4898]: I0120 05:05:28.150495 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-765mp/must-gather-v7gff" Jan 20 05:05:28 crc kubenswrapper[4898]: I0120 05:05:28.638687 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-765mp/must-gather-v7gff"] Jan 20 05:05:29 crc kubenswrapper[4898]: I0120 05:05:29.298851 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-765mp/must-gather-v7gff" event={"ID":"0bf4f276-0eff-4c1c-83e3-005dc6004446","Type":"ContainerStarted","Data":"936dfbf9bfa6b60c2423347fb452274a57f565eea9bc8d89fcc0a6be80733996"} Jan 20 05:05:36 crc kubenswrapper[4898]: I0120 05:05:36.370292 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-765mp/must-gather-v7gff" event={"ID":"0bf4f276-0eff-4c1c-83e3-005dc6004446","Type":"ContainerStarted","Data":"48d5299eb6257f76f558f3c69107be418daf1cb23eec2234770f23e73a4d5299"} Jan 20 05:05:36 crc kubenswrapper[4898]: I0120 05:05:36.370896 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-765mp/must-gather-v7gff" event={"ID":"0bf4f276-0eff-4c1c-83e3-005dc6004446","Type":"ContainerStarted","Data":"bcf2b2e7d01cfc72aa7d8e7048ccfc9dcb4f0ffeddc656d6ee4ca65144da5302"} Jan 20 05:05:36 crc kubenswrapper[4898]: I0120 05:05:36.387830 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-765mp/must-gather-v7gff" podStartSLOduration=3.180601173 podStartE2EDuration="9.387813529s" podCreationTimestamp="2026-01-20 05:05:27 +0000 UTC" firstStartedPulling="2026-01-20 05:05:29.211872994 +0000 UTC m=+4575.811660853" lastFinishedPulling="2026-01-20 05:05:35.41908535 +0000 UTC m=+4582.018873209" observedRunningTime="2026-01-20 05:05:36.386655013 +0000 UTC m=+4582.986442872" watchObservedRunningTime="2026-01-20 05:05:36.387813529 +0000 UTC m=+4582.987601398" Jan 20 05:05:37 crc kubenswrapper[4898]: I0120 05:05:37.721830 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:05:37 crc kubenswrapper[4898]: E0120 05:05:37.722688 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:05:39 crc kubenswrapper[4898]: I0120 05:05:39.595540 4898 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-must-gather-765mp/crc-debug-qpwcw"] Jan 20 05:05:39 crc kubenswrapper[4898]: I0120 05:05:39.597790 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-765mp/crc-debug-qpwcw" Jan 20 05:05:39 crc kubenswrapper[4898]: I0120 05:05:39.724766 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4bft\" (UniqueName: \"kubernetes.io/projected/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-kube-api-access-l4bft\") pod \"crc-debug-qpwcw\" (UID: \"d507d1cb-78d3-42e0-a3ce-36b9f75b619d\") " pod="openshift-must-gather-765mp/crc-debug-qpwcw" Jan 20 05:05:39 crc kubenswrapper[4898]: I0120 05:05:39.725304 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-host\") pod \"crc-debug-qpwcw\" (UID: \"d507d1cb-78d3-42e0-a3ce-36b9f75b619d\") " pod="openshift-must-gather-765mp/crc-debug-qpwcw" Jan 20 05:05:39 crc kubenswrapper[4898]: I0120 05:05:39.827247 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-host\") pod \"crc-debug-qpwcw\" (UID: \"d507d1cb-78d3-42e0-a3ce-36b9f75b619d\") " pod="openshift-must-gather-765mp/crc-debug-qpwcw" Jan 20 05:05:39 crc kubenswrapper[4898]: I0120 05:05:39.827446 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4bft\" (UniqueName: \"kubernetes.io/projected/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-kube-api-access-l4bft\") pod \"crc-debug-qpwcw\" (UID: \"d507d1cb-78d3-42e0-a3ce-36b9f75b619d\") " pod="openshift-must-gather-765mp/crc-debug-qpwcw" Jan 20 05:05:39 crc kubenswrapper[4898]: I0120 05:05:39.829162 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-host\") pod \"crc-debug-qpwcw\" (UID: \"d507d1cb-78d3-42e0-a3ce-36b9f75b619d\") " pod="openshift-must-gather-765mp/crc-debug-qpwcw" Jan 20 05:05:39 crc kubenswrapper[4898]: I0120 05:05:39.858588 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4bft\" (UniqueName: \"kubernetes.io/projected/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-kube-api-access-l4bft\") pod \"crc-debug-qpwcw\" (UID: \"d507d1cb-78d3-42e0-a3ce-36b9f75b619d\") " pod="openshift-must-gather-765mp/crc-debug-qpwcw" Jan 20 05:05:39 crc kubenswrapper[4898]: I0120 05:05:39.918058 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-765mp/crc-debug-qpwcw" Jan 20 05:05:39 crc kubenswrapper[4898]: W0120 05:05:39.952280 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd507d1cb_78d3_42e0_a3ce_36b9f75b619d.slice/crio-d219f15d72e130ea9d681ae7eb27453a765033fe8c84dc9d85fbed1c667e83a1 WatchSource:0}: Error finding container d219f15d72e130ea9d681ae7eb27453a765033fe8c84dc9d85fbed1c667e83a1: Status 404 returned error can't find the container with id d219f15d72e130ea9d681ae7eb27453a765033fe8c84dc9d85fbed1c667e83a1 Jan 20 05:05:40 crc kubenswrapper[4898]: I0120 05:05:40.402189 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-765mp/crc-debug-qpwcw" event={"ID":"d507d1cb-78d3-42e0-a3ce-36b9f75b619d","Type":"ContainerStarted","Data":"d219f15d72e130ea9d681ae7eb27453a765033fe8c84dc9d85fbed1c667e83a1"} Jan 20 05:05:49 crc kubenswrapper[4898]: I0120 05:05:49.721654 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:05:49 crc kubenswrapper[4898]: E0120 05:05:49.722755 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:05:51 crc kubenswrapper[4898]: I0120 05:05:51.496387 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-765mp/crc-debug-qpwcw" event={"ID":"d507d1cb-78d3-42e0-a3ce-36b9f75b619d","Type":"ContainerStarted","Data":"721b62c6a98461e309efd1422699c1dd3c79fdb2a6a6f2c8f38eb904d3c3a85b"} Jan 20 05:05:51 crc kubenswrapper[4898]: I0120 05:05:51.517428 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-765mp/crc-debug-qpwcw" podStartSLOduration=2.127488471 podStartE2EDuration="12.51740563s" podCreationTimestamp="2026-01-20 05:05:39 +0000 UTC" firstStartedPulling="2026-01-20 05:05:39.95502084 +0000 UTC m=+4586.554808699" lastFinishedPulling="2026-01-20 05:05:50.344937999 +0000 UTC m=+4596.944725858" observedRunningTime="2026-01-20 05:05:51.511794057 +0000 UTC m=+4598.111581916" watchObservedRunningTime="2026-01-20 05:05:51.51740563 +0000 UTC m=+4598.117193489" Jan 20 05:06:00 crc kubenswrapper[4898]: I0120 05:06:00.722272 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:06:00 crc kubenswrapper[4898]: E0120 05:06:00.723095 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:06:06 crc kubenswrapper[4898]: I0120 05:06:06.615768 4898 generic.go:334] "Generic (PLEG): container finished" podID="d507d1cb-78d3-42e0-a3ce-36b9f75b619d" containerID="721b62c6a98461e309efd1422699c1dd3c79fdb2a6a6f2c8f38eb904d3c3a85b" exitCode=0 Jan 20 05:06:06 crc kubenswrapper[4898]: I0120 05:06:06.615848 4898 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-765mp/crc-debug-qpwcw" event={"ID":"d507d1cb-78d3-42e0-a3ce-36b9f75b619d","Type":"ContainerDied","Data":"721b62c6a98461e309efd1422699c1dd3c79fdb2a6a6f2c8f38eb904d3c3a85b"} Jan 20 05:06:07 crc kubenswrapper[4898]: I0120 05:06:07.722389 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-765mp/crc-debug-qpwcw" Jan 20 05:06:07 crc kubenswrapper[4898]: I0120 05:06:07.774561 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-765mp/crc-debug-qpwcw"] Jan 20 05:06:07 crc kubenswrapper[4898]: I0120 05:06:07.802652 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-765mp/crc-debug-qpwcw"] Jan 20 05:06:07 crc kubenswrapper[4898]: I0120 05:06:07.849228 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4bft\" (UniqueName: \"kubernetes.io/projected/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-kube-api-access-l4bft\") pod \"d507d1cb-78d3-42e0-a3ce-36b9f75b619d\" (UID: \"d507d1cb-78d3-42e0-a3ce-36b9f75b619d\") " Jan 20 05:06:07 crc kubenswrapper[4898]: I0120 05:06:07.849908 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-host\") pod \"d507d1cb-78d3-42e0-a3ce-36b9f75b619d\" (UID: \"d507d1cb-78d3-42e0-a3ce-36b9f75b619d\") " Jan 20 05:06:07 crc kubenswrapper[4898]: I0120 05:06:07.850073 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-host" (OuterVolumeSpecName: "host") pod "d507d1cb-78d3-42e0-a3ce-36b9f75b619d" (UID: "d507d1cb-78d3-42e0-a3ce-36b9f75b619d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 05:06:07 crc kubenswrapper[4898]: I0120 05:06:07.850741 4898 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-host\") on node \"crc\" DevicePath \"\"" Jan 20 05:06:07 crc kubenswrapper[4898]: I0120 05:06:07.867189 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-kube-api-access-l4bft" (OuterVolumeSpecName: "kube-api-access-l4bft") pod "d507d1cb-78d3-42e0-a3ce-36b9f75b619d" (UID: "d507d1cb-78d3-42e0-a3ce-36b9f75b619d"). InnerVolumeSpecName "kube-api-access-l4bft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 05:06:07 crc kubenswrapper[4898]: I0120 05:06:07.953259 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4bft\" (UniqueName: \"kubernetes.io/projected/d507d1cb-78d3-42e0-a3ce-36b9f75b619d-kube-api-access-l4bft\") on node \"crc\" DevicePath \"\"" Jan 20 05:06:08 crc kubenswrapper[4898]: I0120 05:06:08.634869 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d219f15d72e130ea9d681ae7eb27453a765033fe8c84dc9d85fbed1c667e83a1" Jan 20 05:06:08 crc kubenswrapper[4898]: I0120 05:06:08.634938 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-765mp/crc-debug-qpwcw" Jan 20 05:06:08 crc kubenswrapper[4898]: I0120 05:06:08.974602 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-765mp/crc-debug-nnb6n"] Jan 20 05:06:08 crc kubenswrapper[4898]: E0120 05:06:08.975030 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d507d1cb-78d3-42e0-a3ce-36b9f75b619d" containerName="container-00" Jan 20 05:06:08 crc kubenswrapper[4898]: I0120 05:06:08.975046 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="d507d1cb-78d3-42e0-a3ce-36b9f75b619d" containerName="container-00" Jan 20 05:06:08 crc kubenswrapper[4898]: I0120 05:06:08.975206 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="d507d1cb-78d3-42e0-a3ce-36b9f75b619d" containerName="container-00" Jan 20 05:06:08 crc kubenswrapper[4898]: I0120 05:06:08.975797 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-765mp/crc-debug-nnb6n" Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.072368 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgv5f\" (UniqueName: \"kubernetes.io/projected/b8bfa27c-9dea-4072-a25a-928fda40149b-kube-api-access-mgv5f\") pod \"crc-debug-nnb6n\" (UID: \"b8bfa27c-9dea-4072-a25a-928fda40149b\") " pod="openshift-must-gather-765mp/crc-debug-nnb6n" Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.072892 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8bfa27c-9dea-4072-a25a-928fda40149b-host\") pod \"crc-debug-nnb6n\" (UID: \"b8bfa27c-9dea-4072-a25a-928fda40149b\") " pod="openshift-must-gather-765mp/crc-debug-nnb6n" Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.175227 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgv5f\" (UniqueName: \"kubernetes.io/projected/b8bfa27c-9dea-4072-a25a-928fda40149b-kube-api-access-mgv5f\") pod \"crc-debug-nnb6n\" (UID: \"b8bfa27c-9dea-4072-a25a-928fda40149b\") " pod="openshift-must-gather-765mp/crc-debug-nnb6n" Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.175318 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8bfa27c-9dea-4072-a25a-928fda40149b-host\") pod \"crc-debug-nnb6n\" (UID: \"b8bfa27c-9dea-4072-a25a-928fda40149b\") " pod="openshift-must-gather-765mp/crc-debug-nnb6n" Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.175468 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8bfa27c-9dea-4072-a25a-928fda40149b-host\") pod \"crc-debug-nnb6n\" (UID: \"b8bfa27c-9dea-4072-a25a-928fda40149b\") " pod="openshift-must-gather-765mp/crc-debug-nnb6n" Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.205918 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgv5f\" (UniqueName: \"kubernetes.io/projected/b8bfa27c-9dea-4072-a25a-928fda40149b-kube-api-access-mgv5f\") pod \"crc-debug-nnb6n\" (UID: \"b8bfa27c-9dea-4072-a25a-928fda40149b\") " pod="openshift-must-gather-765mp/crc-debug-nnb6n" Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.296186 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-765mp/crc-debug-nnb6n" Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.643306 4898 generic.go:334] "Generic (PLEG): container finished" podID="b8bfa27c-9dea-4072-a25a-928fda40149b" containerID="29dc4ad13e457e9f2b9da468aba01575e7722c2fe293554e35b608f7393e3caa" exitCode=1 Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.643685 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-765mp/crc-debug-nnb6n" event={"ID":"b8bfa27c-9dea-4072-a25a-928fda40149b","Type":"ContainerDied","Data":"29dc4ad13e457e9f2b9da468aba01575e7722c2fe293554e35b608f7393e3caa"} Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.643715 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-765mp/crc-debug-nnb6n" event={"ID":"b8bfa27c-9dea-4072-a25a-928fda40149b","Type":"ContainerStarted","Data":"288c821ee572a9545ad60adea726cbe441bc879e1aa6faf2a9dacc2bcc1101ae"} Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.677093 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-765mp/crc-debug-nnb6n"] Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.684803 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-765mp/crc-debug-nnb6n"] Jan 20 05:06:09 crc kubenswrapper[4898]: I0120 05:06:09.735379 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d507d1cb-78d3-42e0-a3ce-36b9f75b619d" path="/var/lib/kubelet/pods/d507d1cb-78d3-42e0-a3ce-36b9f75b619d/volumes" Jan 20 05:06:10 crc kubenswrapper[4898]: I0120 05:06:10.739331 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-765mp/crc-debug-nnb6n" Jan 20 05:06:10 crc kubenswrapper[4898]: I0120 05:06:10.802001 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8bfa27c-9dea-4072-a25a-928fda40149b-host\") pod \"b8bfa27c-9dea-4072-a25a-928fda40149b\" (UID: \"b8bfa27c-9dea-4072-a25a-928fda40149b\") " Jan 20 05:06:10 crc kubenswrapper[4898]: I0120 05:06:10.802074 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgv5f\" (UniqueName: \"kubernetes.io/projected/b8bfa27c-9dea-4072-a25a-928fda40149b-kube-api-access-mgv5f\") pod \"b8bfa27c-9dea-4072-a25a-928fda40149b\" (UID: \"b8bfa27c-9dea-4072-a25a-928fda40149b\") " Jan 20 05:06:10 crc kubenswrapper[4898]: I0120 05:06:10.802259 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8bfa27c-9dea-4072-a25a-928fda40149b-host" (OuterVolumeSpecName: "host") pod "b8bfa27c-9dea-4072-a25a-928fda40149b" (UID: "b8bfa27c-9dea-4072-a25a-928fda40149b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 05:06:10 crc kubenswrapper[4898]: I0120 05:06:10.802595 4898 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8bfa27c-9dea-4072-a25a-928fda40149b-host\") on node \"crc\" DevicePath \"\"" Jan 20 05:06:10 crc kubenswrapper[4898]: I0120 05:06:10.808116 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bfa27c-9dea-4072-a25a-928fda40149b-kube-api-access-mgv5f" (OuterVolumeSpecName: "kube-api-access-mgv5f") pod "b8bfa27c-9dea-4072-a25a-928fda40149b" (UID: "b8bfa27c-9dea-4072-a25a-928fda40149b"). InnerVolumeSpecName "kube-api-access-mgv5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 05:06:10 crc kubenswrapper[4898]: I0120 05:06:10.904333 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgv5f\" (UniqueName: \"kubernetes.io/projected/b8bfa27c-9dea-4072-a25a-928fda40149b-kube-api-access-mgv5f\") on node \"crc\" DevicePath \"\"" Jan 20 05:06:11 crc kubenswrapper[4898]: I0120 05:06:11.685397 4898 scope.go:117] "RemoveContainer" containerID="29dc4ad13e457e9f2b9da468aba01575e7722c2fe293554e35b608f7393e3caa" Jan 20 05:06:11 crc kubenswrapper[4898]: I0120 05:06:11.685575 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-765mp/crc-debug-nnb6n" Jan 20 05:06:11 crc kubenswrapper[4898]: I0120 05:06:11.732701 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bfa27c-9dea-4072-a25a-928fda40149b" path="/var/lib/kubelet/pods/b8bfa27c-9dea-4072-a25a-928fda40149b/volumes" Jan 20 05:06:12 crc kubenswrapper[4898]: I0120 05:06:12.721927 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:06:12 crc kubenswrapper[4898]: E0120 05:06:12.722192 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:06:26 crc kubenswrapper[4898]: I0120 05:06:26.721230 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:06:26 crc kubenswrapper[4898]: E0120 05:06:26.722114 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:06:38 crc kubenswrapper[4898]: I0120 05:06:38.759642 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:06:38 crc kubenswrapper[4898]: E0120 05:06:38.761174 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:06:41 crc kubenswrapper[4898]: I0120 05:06:41.405979 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-668d8597cb-gql24_5b8611ec-0af8-4f71-86ac-f2b2f16f10ed/barbican-api-log/0.log" Jan 20 05:06:41 crc kubenswrapper[4898]: I0120 05:06:41.443302 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-668d8597cb-gql24_5b8611ec-0af8-4f71-86ac-f2b2f16f10ed/barbican-api/0.log" Jan 20 05:06:41 crc kubenswrapper[4898]: I0120 05:06:41.560650 4898 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-5b98946766-bzxqb_87c82f48-c250-400b-b1b0-00a613cbd1e7/barbican-keystone-listener/0.log" Jan 20 05:06:41 crc kubenswrapper[4898]: I0120 05:06:41.650396 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-646c85868c-g2f2w_85d50ce2-27f1-4ae8-8612-647c1856e03e/barbican-worker/0.log" Jan 20 05:06:41 crc kubenswrapper[4898]: I0120 05:06:41.664538 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5b98946766-bzxqb_87c82f48-c250-400b-b1b0-00a613cbd1e7/barbican-keystone-listener-log/0.log" Jan 20 05:06:41 crc kubenswrapper[4898]: I0120 05:06:41.745451 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-646c85868c-g2f2w_85d50ce2-27f1-4ae8-8612-647c1856e03e/barbican-worker-log/0.log" Jan 20 05:06:41 crc kubenswrapper[4898]: I0120 05:06:41.860079 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0/ceilometer-central-agent/0.log" Jan 20 05:06:41 crc kubenswrapper[4898]: I0120 05:06:41.902998 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0/ceilometer-notification-agent/0.log" Jan 20 05:06:41 crc kubenswrapper[4898]: I0120 05:06:41.961914 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0/proxy-httpd/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.007577 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8fd30d57-bc56-4a2f-aba3-b9208fc1b0a0/sg-core/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.168399 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bf226116-7c6f-473d-9ffe-14cb5e7bbdc5/cinder-api-log/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.171183 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bf226116-7c6f-473d-9ffe-14cb5e7bbdc5/cinder-api/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.277803 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6b9d909f-718d-4eb5-8321-f1f20f54e2a4/cinder-scheduler/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.368847 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6b9d909f-718d-4eb5-8321-f1f20f54e2a4/probe/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.416901 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-9cg5d_45b57056-2421-4715-a5ec-3c8f74566387/init/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.600808 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-9cg5d_45b57056-2421-4715-a5ec-3c8f74566387/init/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.601921 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-9cg5d_45b57056-2421-4715-a5ec-3c8f74566387/dnsmasq-dns/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.620534 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_878e8a52-a939-4b57-b229-d72049650611/glance-httpd/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.806310 4898 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_878e8a52-a939-4b57-b229-d72049650611/glance-log/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.827202 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_db5d1c2c-5f41-4b41-8503-72518de5ba3a/glance-httpd/0.log" Jan 20 05:06:42 crc kubenswrapper[4898]: I0120 05:06:42.849567 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_db5d1c2c-5f41-4b41-8503-72518de5ba3a/glance-log/0.log" Jan 20 05:06:43 crc kubenswrapper[4898]: I0120 05:06:43.075289 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29481421-hm5dz_123681c9-37b6-4096-9a29-9547b9d33f01/keystone-cron/0.log" Jan 20 05:06:43 crc kubenswrapper[4898]: I0120 05:06:43.081972 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-8684dcb884-x94mz_ad336ef7-2c6d-46c5-b7ae-996366226bc5/keystone-api/0.log" Jan 20 05:06:43 crc kubenswrapper[4898]: I0120 05:06:43.249514 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_035f57fe-ac66-4b46-93d4-26575736e9bb/kube-state-metrics/0.log" Jan 20 05:06:43 crc kubenswrapper[4898]: I0120 05:06:43.462351 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6885b44c89-c7rr5_3ab69a28-3e73-467a-a062-d881466b26a6/neutron-api/0.log" Jan 20 05:06:43 crc kubenswrapper[4898]: I0120 05:06:43.749644 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6885b44c89-c7rr5_3ab69a28-3e73-467a-a062-d881466b26a6/neutron-httpd/0.log" Jan 20 05:06:44 crc kubenswrapper[4898]: I0120 05:06:44.199278 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d8b34b96-1828-4d70-82bd-3cc6c02f76a9/nova-api-log/0.log" Jan 20 05:06:44 crc kubenswrapper[4898]: I0120 05:06:44.398716 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d8b34b96-1828-4d70-82bd-3cc6c02f76a9/nova-api-api/0.log" Jan 20 05:06:44 crc kubenswrapper[4898]: I0120 05:06:44.687916 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7d04727f-3596-48ce-a3d5-d4d0deb8fe89/nova-cell0-conductor-conductor/0.log" Jan 20 05:06:44 crc kubenswrapper[4898]: I0120 05:06:44.781813 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_0f36a422-fdc8-447c-8abb-8deabac9c903/nova-cell1-conductor-conductor/0.log" Jan 20 05:06:45 crc kubenswrapper[4898]: I0120 05:06:45.007266 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5e24c3c5-6665-47c4-a82b-0b2c0da055ea/nova-cell1-novncproxy-novncproxy/0.log" Jan 20 05:06:45 crc kubenswrapper[4898]: I0120 05:06:45.229130 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0a46086c-f810-4cb5-aef8-8d12bb3d292f/nova-metadata-log/0.log" Jan 20 05:06:45 crc kubenswrapper[4898]: I0120 05:06:45.430174 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_899ce7b8-38d0-4d36-8d05-7ee9fe0599b3/nova-scheduler-scheduler/0.log" Jan 20 05:06:45 crc kubenswrapper[4898]: I0120 05:06:45.494808 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f726d262-f94d-4ff3-a4ae-a51076898b72/mysql-bootstrap/0.log" Jan 20 05:06:45 crc kubenswrapper[4898]: I0120 05:06:45.670419 4898 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_f726d262-f94d-4ff3-a4ae-a51076898b72/mysql-bootstrap/0.log" Jan 20 05:06:45 crc kubenswrapper[4898]: I0120 05:06:45.726973 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f726d262-f94d-4ff3-a4ae-a51076898b72/galera/0.log" Jan 20 05:06:45 crc kubenswrapper[4898]: I0120 05:06:45.892352 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ec4f8f5c-5a5e-4c01-a81a-567a6e62176d/mysql-bootstrap/0.log" Jan 20 05:06:46 crc kubenswrapper[4898]: I0120 05:06:46.157571 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ec4f8f5c-5a5e-4c01-a81a-567a6e62176d/galera/0.log" Jan 20 05:06:46 crc kubenswrapper[4898]: I0120 05:06:46.158577 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ec4f8f5c-5a5e-4c01-a81a-567a6e62176d/mysql-bootstrap/0.log" Jan 20 05:06:46 crc kubenswrapper[4898]: I0120 05:06:46.388816 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_37019a69-852b-47d8-8090-db3f78bbf2a5/openstackclient/0.log" Jan 20 05:06:46 crc kubenswrapper[4898]: I0120 05:06:46.438554 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ln6nh_a6903c19-3320-443c-8713-105a39a65527/ovn-controller/0.log" Jan 20 05:06:46 crc kubenswrapper[4898]: I0120 05:06:46.602015 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-7hcw6_4387ced7-ff2c-480f-826c-5765f3a17162/openstack-network-exporter/0.log" Jan 20 05:06:46 crc kubenswrapper[4898]: I0120 05:06:46.831321 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9mdd5_ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef/ovsdb-server-init/0.log" Jan 20 05:06:46 crc kubenswrapper[4898]: I0120 05:06:46.864729 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0a46086c-f810-4cb5-aef8-8d12bb3d292f/nova-metadata-metadata/0.log" Jan 20 05:06:46 crc kubenswrapper[4898]: I0120 05:06:46.928063 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9mdd5_ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef/ovs-vswitchd/0.log" Jan 20 05:06:46 crc kubenswrapper[4898]: I0120 05:06:46.964018 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9mdd5_ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef/ovsdb-server-init/0.log" Jan 20 05:06:47 crc kubenswrapper[4898]: I0120 05:06:47.027565 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9mdd5_ed6e242c-cbda-4cd8-a1e9-9c9f7c3b75ef/ovsdb-server/0.log" Jan 20 05:06:47 crc kubenswrapper[4898]: I0120 05:06:47.117369 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7358529c-1249-443a-b295-bf0250c63af1/openstack-network-exporter/0.log" Jan 20 05:06:47 crc kubenswrapper[4898]: I0120 05:06:47.182257 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7358529c-1249-443a-b295-bf0250c63af1/ovn-northd/0.log" Jan 20 05:06:47 crc kubenswrapper[4898]: I0120 05:06:47.321331 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8a20df64-f80d-4506-bcf4-2cdcc1eee607/openstack-network-exporter/0.log" Jan 20 05:06:47 crc kubenswrapper[4898]: I0120 05:06:47.330792 4898 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_8a20df64-f80d-4506-bcf4-2cdcc1eee607/ovsdbserver-nb/0.log" Jan 20 05:06:47 crc kubenswrapper[4898]: I0120 05:06:47.513061 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_82d86080-ab0b-4b48-9847-ead3c4bcc6c4/ovsdbserver-sb/0.log" Jan 20 05:06:47 crc kubenswrapper[4898]: I0120 05:06:47.564974 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_82d86080-ab0b-4b48-9847-ead3c4bcc6c4/openstack-network-exporter/0.log" Jan 20 05:06:47 crc kubenswrapper[4898]: I0120 05:06:47.694344 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-659876b84-8djq9_83dbc10c-d5eb-435e-97b2-3b615b6e4e10/placement-api/0.log" Jan 20 05:06:47 crc kubenswrapper[4898]: I0120 05:06:47.774343 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a1f422f4-afd1-4794-85b1-cb82712e004a/setup-container/0.log" Jan 20 05:06:47 crc kubenswrapper[4898]: I0120 05:06:47.785230 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-659876b84-8djq9_83dbc10c-d5eb-435e-97b2-3b615b6e4e10/placement-log/0.log" Jan 20 05:06:48 crc kubenswrapper[4898]: I0120 05:06:48.446055 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a1f422f4-afd1-4794-85b1-cb82712e004a/setup-container/0.log" Jan 20 05:06:48 crc kubenswrapper[4898]: I0120 05:06:48.699021 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a1f422f4-afd1-4794-85b1-cb82712e004a/rabbitmq/0.log" Jan 20 05:06:48 crc kubenswrapper[4898]: I0120 05:06:48.758368 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48/setup-container/0.log" Jan 20 05:06:48 crc kubenswrapper[4898]: I0120 05:06:48.912046 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48/setup-container/0.log" Jan 20 05:06:48 crc kubenswrapper[4898]: I0120 05:06:48.936964 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1b2bd39c-8c86-45bf-b37e-5a73ea3e2d48/rabbitmq/0.log" Jan 20 05:06:49 crc kubenswrapper[4898]: I0120 05:06:49.103168 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-659bc66b4c-5cnqm_9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a/proxy-server/0.log" Jan 20 05:06:49 crc kubenswrapper[4898]: I0120 05:06:49.221241 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-659bc66b4c-5cnqm_9eeb9c6f-2ac9-45bb-a885-26da22cd5d2a/proxy-httpd/0.log" Jan 20 05:06:49 crc kubenswrapper[4898]: I0120 05:06:49.231770 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-smm2m_77d37150-8bcb-46ff-9b40-aa959b7993d2/swift-ring-rebalance/0.log" Jan 20 05:06:49 crc kubenswrapper[4898]: I0120 05:06:49.341163 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/account-auditor/0.log" Jan 20 05:06:49 crc kubenswrapper[4898]: I0120 05:06:49.479620 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/account-replicator/0.log" Jan 20 05:06:49 crc kubenswrapper[4898]: I0120 05:06:49.491310 4898 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/account-reaper/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.099545 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/account-server/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.104316 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/container-auditor/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.151051 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/container-replicator/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.156864 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/container-server/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.304941 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/container-updater/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.321801 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/object-auditor/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.360691 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/object-replicator/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.377695 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/object-expirer/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.469129 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/object-server/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.532398 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/rsync/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.588414 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/object-updater/0.log" Jan 20 05:06:50 crc kubenswrapper[4898]: I0120 05:06:50.632698 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_311bb6c9-9ed4-4d8b-8b0f-839ecf2bfb5b/swift-recon-cron/0.log" Jan 20 05:06:51 crc kubenswrapper[4898]: I0120 05:06:51.721218 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:06:51 crc kubenswrapper[4898]: E0120 05:06:51.721753 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:06:54 crc kubenswrapper[4898]: I0120 05:06:54.203769 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7f47b2f9-88d3-43e4-9f9c-da4340a63519/memcached/0.log" Jan 20 05:07:03 crc 
kubenswrapper[4898]: I0120 05:07:03.731273 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:07:03 crc kubenswrapper[4898]: E0120 05:07:03.732547 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:07:12 crc kubenswrapper[4898]: I0120 05:07:12.112765 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp_3850c7f3-e99b-4c2b-b0cd-bfc05057051a/util/0.log" Jan 20 05:07:12 crc kubenswrapper[4898]: I0120 05:07:12.292805 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp_3850c7f3-e99b-4c2b-b0cd-bfc05057051a/pull/0.log" Jan 20 05:07:12 crc kubenswrapper[4898]: I0120 05:07:12.300392 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp_3850c7f3-e99b-4c2b-b0cd-bfc05057051a/util/0.log" Jan 20 05:07:12 crc kubenswrapper[4898]: I0120 05:07:12.379847 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp_3850c7f3-e99b-4c2b-b0cd-bfc05057051a/pull/0.log" Jan 20 05:07:12 crc kubenswrapper[4898]: I0120 05:07:12.526632 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp_3850c7f3-e99b-4c2b-b0cd-bfc05057051a/pull/0.log" Jan 20 05:07:12 crc kubenswrapper[4898]: I0120 05:07:12.528354 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp_3850c7f3-e99b-4c2b-b0cd-bfc05057051a/extract/0.log" Jan 20 05:07:12 crc kubenswrapper[4898]: I0120 05:07:12.541035 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_92e7d1d357df9bc0c970618c296436e779155f094aa7184a25a1439bb2hjmwp_3850c7f3-e99b-4c2b-b0cd-bfc05057051a/util/0.log" Jan 20 05:07:12 crc kubenswrapper[4898]: I0120 05:07:12.737804 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-rcbnl_25439b6f-5a2a-4577-afd8-787d44877848/manager/0.log" Jan 20 05:07:12 crc kubenswrapper[4898]: I0120 05:07:12.786986 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-lvhpk_765cfe55-b774-4345-8884-ca22330cf340/manager/0.log" Jan 20 05:07:12 crc kubenswrapper[4898]: I0120 05:07:12.909849 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-tnrj8_0389b651-5be7-45a2-bba7-d204285978e7/manager/0.log" Jan 20 05:07:13 crc kubenswrapper[4898]: I0120 05:07:13.055030 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-mzmtp_0b29f776-9321-4967-bf9b-6fe35cf6c195/manager/0.log" Jan 20 05:07:13 crc kubenswrapper[4898]: I0120 05:07:13.138622 4898 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d87976b78-jpmbw_d5cf00c9-700f-4f7b-98e1-626fdc638e32/manager/0.log" Jan 20 05:07:13 crc kubenswrapper[4898]: I0120 05:07:13.211039 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-vhlsd_1458adb7-c7bd-4a1d-8d55-1f9b9cd98e14/manager/0.log" Jan 20 05:07:13 crc kubenswrapper[4898]: I0120 05:07:13.444501 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-7wx4b_45d870c0-6af6-4cb0-9704-2ffafb2c423c/manager/0.log" Jan 20 05:07:13 crc kubenswrapper[4898]: I0120 05:07:13.516636 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-pzzk6_c88cac18-f08f-4ad2-8bf2-21d27972223a/manager/0.log" Jan 20 05:07:13 crc kubenswrapper[4898]: I0120 05:07:13.647950 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-2vrjf_1aeab6f0-46f5-41ac-a3b7-4d428ab7c321/manager/0.log" Jan 20 05:07:13 crc kubenswrapper[4898]: I0120 05:07:13.704021 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-bzjfx_3501119f-a33c-4069-bd8d-fe5fb5ef021b/manager/0.log" Jan 20 05:07:13 crc kubenswrapper[4898]: I0120 05:07:13.868828 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-mc2n6_ee913f4e-9e00-45c8-9af4-191ecef1a2ff/manager/0.log" Jan 20 05:07:13 crc kubenswrapper[4898]: I0120 05:07:13.969567 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-pvq64_c2673446-b027-4a75-b0d3-e823d7da9b4b/manager/0.log" Jan 20 05:07:14 crc kubenswrapper[4898]: I0120 05:07:14.131657 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-wh8bc_e4f2f74e-46b2-4c21-8b82-13f450218389/manager/0.log" Jan 20 05:07:14 crc kubenswrapper[4898]: I0120 05:07:14.161653 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-5bnlz_e386e470-2db0-442a-8dd1-853ffe97e0f7/manager/0.log" Jan 20 05:07:14 crc kubenswrapper[4898]: I0120 05:07:14.286763 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8542l9wk_56d05f5e-aa64-4ad6-94e0-aa14aa9317cb/manager/0.log" Jan 20 05:07:14 crc kubenswrapper[4898]: I0120 05:07:14.478918 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-84587498f9-g7d2w_037e77e5-fdab-45b4-9b61-a24279e2b615/operator/0.log" Jan 20 05:07:14 crc kubenswrapper[4898]: I0120 05:07:14.723115 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8fbkv_d7a9d525-aa60-4aa7-b12b-28c27c8fa591/registry-server/0.log" Jan 20 05:07:14 crc kubenswrapper[4898]: I0120 05:07:14.884335 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-s42q8_79a8ab34-c753-4e7d-8152-a62f7084c84e/manager/0.log" Jan 20 05:07:15 crc kubenswrapper[4898]: I0120 05:07:15.027948 4898 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-b4bkc_b7b63269-33e7-4ef9-bf03-e37aac59ce07/manager/0.log" Jan 20 05:07:15 crc kubenswrapper[4898]: I0120 05:07:15.131175 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zjksr_0985373f-27d2-41cb-ba87-5d5845588c6b/operator/0.log" Jan 20 05:07:15 crc kubenswrapper[4898]: I0120 05:07:15.188111 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cbf4594b6-vxpqs_6071d625-ea99-445e-a23c-31cf9e37b1f6/manager/0.log" Jan 20 05:07:15 crc kubenswrapper[4898]: I0120 05:07:15.334493 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-9wjw5_ed13994e-012c-4775-9bac-c35117a1630b/manager/0.log" Jan 20 05:07:15 crc kubenswrapper[4898]: I0120 05:07:15.421175 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-jfvrm_dae6293c-0cae-4aff-a936-85ed72377a31/manager/0.log" Jan 20 05:07:15 crc kubenswrapper[4898]: I0120 05:07:15.517816 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-74nht_e40924a1-c172-4372-8adb-3919447c7207/manager/0.log" Jan 20 05:07:15 crc kubenswrapper[4898]: I0120 05:07:15.605688 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-56hrd_0c3a8459-142e-4e4d-8546-220b3feec6ec/manager/0.log" Jan 20 05:07:16 crc kubenswrapper[4898]: I0120 05:07:16.721363 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:07:16 crc kubenswrapper[4898]: E0120 05:07:16.722317 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.655074 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sncxf"] Jan 20 05:07:25 crc kubenswrapper[4898]: E0120 05:07:25.656301 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bfa27c-9dea-4072-a25a-928fda40149b" containerName="container-00" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.656323 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bfa27c-9dea-4072-a25a-928fda40149b" containerName="container-00" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.656658 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bfa27c-9dea-4072-a25a-928fda40149b" containerName="container-00" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.658950 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.669570 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sncxf"] Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.699978 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-utilities\") pod \"redhat-marketplace-sncxf\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.700035 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4cm\" (UniqueName: \"kubernetes.io/projected/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-kube-api-access-7w4cm\") pod \"redhat-marketplace-sncxf\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.700235 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-catalog-content\") pod \"redhat-marketplace-sncxf\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.802664 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-utilities\") pod \"redhat-marketplace-sncxf\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.802725 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4cm\" (UniqueName: \"kubernetes.io/projected/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-kube-api-access-7w4cm\") pod \"redhat-marketplace-sncxf\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.802860 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-catalog-content\") pod \"redhat-marketplace-sncxf\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.803415 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-catalog-content\") pod \"redhat-marketplace-sncxf\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.803519 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-utilities\") pod \"redhat-marketplace-sncxf\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.822243 4898 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7w4cm\" (UniqueName: \"kubernetes.io/projected/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-kube-api-access-7w4cm\") pod \"redhat-marketplace-sncxf\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.851281 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9gmpv"] Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.853606 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.862829 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gmpv"] Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.904888 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-catalog-content\") pod \"redhat-operators-9gmpv\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.905375 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-utilities\") pod \"redhat-operators-9gmpv\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:25 crc kubenswrapper[4898]: I0120 05:07:25.905418 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh454\" (UniqueName: \"kubernetes.io/projected/74c2193b-2224-4618-9c62-4371508a17e4-kube-api-access-gh454\") pod \"redhat-operators-9gmpv\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:26 crc kubenswrapper[4898]: I0120 05:07:26.006577 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-utilities\") pod \"redhat-operators-9gmpv\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:26 crc kubenswrapper[4898]: I0120 05:07:26.006618 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh454\" (UniqueName: \"kubernetes.io/projected/74c2193b-2224-4618-9c62-4371508a17e4-kube-api-access-gh454\") pod \"redhat-operators-9gmpv\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:26 crc kubenswrapper[4898]: I0120 05:07:26.006690 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-catalog-content\") pod \"redhat-operators-9gmpv\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:26 crc kubenswrapper[4898]: I0120 05:07:26.007114 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-utilities\") pod \"redhat-operators-9gmpv\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " 
pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:26 crc kubenswrapper[4898]: I0120 05:07:26.007195 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-catalog-content\") pod \"redhat-operators-9gmpv\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:26 crc kubenswrapper[4898]: I0120 05:07:26.014493 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:26 crc kubenswrapper[4898]: I0120 05:07:26.039902 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh454\" (UniqueName: \"kubernetes.io/projected/74c2193b-2224-4618-9c62-4371508a17e4-kube-api-access-gh454\") pod \"redhat-operators-9gmpv\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:26 crc kubenswrapper[4898]: I0120 05:07:26.191566 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:26 crc kubenswrapper[4898]: I0120 05:07:26.485700 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sncxf"] Jan 20 05:07:26 crc kubenswrapper[4898]: I0120 05:07:26.674683 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gmpv"] Jan 20 05:07:26 crc kubenswrapper[4898]: W0120 05:07:26.675332 4898 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74c2193b_2224_4618_9c62_4371508a17e4.slice/crio-e78f4a0406e7e966db90c1d7c6a78ab4f660648707a058075bfc53533813e6ac WatchSource:0}: Error finding container e78f4a0406e7e966db90c1d7c6a78ab4f660648707a058075bfc53533813e6ac: Status 404 returned error can't find the container with id e78f4a0406e7e966db90c1d7c6a78ab4f660648707a058075bfc53533813e6ac Jan 20 05:07:27 crc kubenswrapper[4898]: I0120 05:07:27.338902 4898 generic.go:334] "Generic (PLEG): container finished" podID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" containerID="25b9a3b8ea99f54fdc0a148b1595de72f7061c3f75e44dcdd7f93b42ac344267" exitCode=0 Jan 20 05:07:27 crc kubenswrapper[4898]: I0120 05:07:27.338972 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sncxf" event={"ID":"030776dc-2a34-4607-8a44-b6b6e1f3eb8b","Type":"ContainerDied","Data":"25b9a3b8ea99f54fdc0a148b1595de72f7061c3f75e44dcdd7f93b42ac344267"} Jan 20 05:07:27 crc kubenswrapper[4898]: I0120 05:07:27.339224 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sncxf" event={"ID":"030776dc-2a34-4607-8a44-b6b6e1f3eb8b","Type":"ContainerStarted","Data":"dafe6e92371b383f1dd44ddc1959879a581bd9d6bd85b6092823d9f6ef9afb8b"} Jan 20 05:07:27 crc kubenswrapper[4898]: I0120 05:07:27.341371 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 05:07:27 crc kubenswrapper[4898]: I0120 05:07:27.341686 4898 generic.go:334] "Generic (PLEG): container finished" podID="74c2193b-2224-4618-9c62-4371508a17e4" containerID="e9b84f2bcda58b38e3dd64b30516f524de372c5caa383a8daa8aae052c5378d4" exitCode=0 Jan 20 05:07:27 crc kubenswrapper[4898]: I0120 05:07:27.341714 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9gmpv" event={"ID":"74c2193b-2224-4618-9c62-4371508a17e4","Type":"ContainerDied","Data":"e9b84f2bcda58b38e3dd64b30516f524de372c5caa383a8daa8aae052c5378d4"} Jan 20 05:07:27 crc kubenswrapper[4898]: I0120 05:07:27.341731 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gmpv" event={"ID":"74c2193b-2224-4618-9c62-4371508a17e4","Type":"ContainerStarted","Data":"e78f4a0406e7e966db90c1d7c6a78ab4f660648707a058075bfc53533813e6ac"} Jan 20 05:07:28 crc kubenswrapper[4898]: I0120 05:07:28.350938 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sncxf" event={"ID":"030776dc-2a34-4607-8a44-b6b6e1f3eb8b","Type":"ContainerStarted","Data":"ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef"} Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.247799 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gsmxb"] Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.249990 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.260956 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gsmxb"] Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.359729 4898 generic.go:334] "Generic (PLEG): container finished" podID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" containerID="ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef" exitCode=0 Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.359839 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sncxf" event={"ID":"030776dc-2a34-4607-8a44-b6b6e1f3eb8b","Type":"ContainerDied","Data":"ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef"} Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.362678 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gmpv" event={"ID":"74c2193b-2224-4618-9c62-4371508a17e4","Type":"ContainerStarted","Data":"f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52"} Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.374687 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-catalog-content\") pod \"certified-operators-gsmxb\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.374797 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzm5x\" (UniqueName: \"kubernetes.io/projected/070ff34c-43d3-489b-b838-f15ac96e0615-kube-api-access-vzm5x\") pod \"certified-operators-gsmxb\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.374927 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-utilities\") pod \"certified-operators-gsmxb\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:29 crc 
kubenswrapper[4898]: I0120 05:07:29.477299 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-catalog-content\") pod \"certified-operators-gsmxb\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.477356 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzm5x\" (UniqueName: \"kubernetes.io/projected/070ff34c-43d3-489b-b838-f15ac96e0615-kube-api-access-vzm5x\") pod \"certified-operators-gsmxb\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.477418 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-utilities\") pod \"certified-operators-gsmxb\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.478581 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-utilities\") pod \"certified-operators-gsmxb\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.478926 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-catalog-content\") pod \"certified-operators-gsmxb\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.510876 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzm5x\" (UniqueName: \"kubernetes.io/projected/070ff34c-43d3-489b-b838-f15ac96e0615-kube-api-access-vzm5x\") pod \"certified-operators-gsmxb\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.569525 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:29 crc kubenswrapper[4898]: I0120 05:07:29.721057 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:07:29 crc kubenswrapper[4898]: E0120 05:07:29.721693 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:07:30 crc kubenswrapper[4898]: I0120 05:07:30.130963 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gsmxb"] Jan 20 05:07:30 crc kubenswrapper[4898]: I0120 05:07:30.377411 4898 generic.go:334] "Generic (PLEG): container finished" podID="74c2193b-2224-4618-9c62-4371508a17e4" containerID="f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52" exitCode=0 Jan 20 05:07:30 crc kubenswrapper[4898]: I0120 05:07:30.377541 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gmpv" event={"ID":"74c2193b-2224-4618-9c62-4371508a17e4","Type":"ContainerDied","Data":"f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52"} Jan 20 05:07:31 crc kubenswrapper[4898]: I0120 05:07:31.388903 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sncxf" event={"ID":"030776dc-2a34-4607-8a44-b6b6e1f3eb8b","Type":"ContainerStarted","Data":"07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f"} Jan 20 05:07:31 crc kubenswrapper[4898]: I0120 05:07:31.390828 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gsmxb" event={"ID":"070ff34c-43d3-489b-b838-f15ac96e0615","Type":"ContainerStarted","Data":"f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f"} Jan 20 05:07:31 crc kubenswrapper[4898]: I0120 05:07:31.390881 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gsmxb" event={"ID":"070ff34c-43d3-489b-b838-f15ac96e0615","Type":"ContainerStarted","Data":"fcee87a8cb4813629894bcab642c05f983e2088815d4f9f6def2c0018b09d3af"} Jan 20 05:07:31 crc kubenswrapper[4898]: I0120 05:07:31.406815 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sncxf" podStartSLOduration=2.891552394 podStartE2EDuration="6.40679491s" podCreationTimestamp="2026-01-20 05:07:25 +0000 UTC" firstStartedPulling="2026-01-20 05:07:27.341133004 +0000 UTC m=+4693.940920863" lastFinishedPulling="2026-01-20 05:07:30.85637552 +0000 UTC m=+4697.456163379" observedRunningTime="2026-01-20 05:07:31.406735529 +0000 UTC m=+4698.006523398" watchObservedRunningTime="2026-01-20 05:07:31.40679491 +0000 UTC m=+4698.006582769" Jan 20 05:07:32 crc kubenswrapper[4898]: I0120 05:07:32.401933 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gmpv" event={"ID":"74c2193b-2224-4618-9c62-4371508a17e4","Type":"ContainerStarted","Data":"1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a"} Jan 20 05:07:32 crc kubenswrapper[4898]: I0120 05:07:32.439424 4898 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-operators-9gmpv" podStartSLOduration=3.401545524 podStartE2EDuration="7.439407942s" podCreationTimestamp="2026-01-20 05:07:25 +0000 UTC" firstStartedPulling="2026-01-20 05:07:27.342862867 +0000 UTC m=+4693.942650726" lastFinishedPulling="2026-01-20 05:07:31.380725285 +0000 UTC m=+4697.980513144" observedRunningTime="2026-01-20 05:07:32.435366987 +0000 UTC m=+4699.035154856" watchObservedRunningTime="2026-01-20 05:07:32.439407942 +0000 UTC m=+4699.039195801" Jan 20 05:07:33 crc kubenswrapper[4898]: I0120 05:07:33.411182 4898 generic.go:334] "Generic (PLEG): container finished" podID="070ff34c-43d3-489b-b838-f15ac96e0615" containerID="f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f" exitCode=0 Jan 20 05:07:33 crc kubenswrapper[4898]: I0120 05:07:33.411257 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gsmxb" event={"ID":"070ff34c-43d3-489b-b838-f15ac96e0615","Type":"ContainerDied","Data":"f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f"} Jan 20 05:07:35 crc kubenswrapper[4898]: I0120 05:07:35.430319 4898 generic.go:334] "Generic (PLEG): container finished" podID="070ff34c-43d3-489b-b838-f15ac96e0615" containerID="8269f2d8e34ae019ebfe9c6fd2aeb39a6bcd9433b684458b939113a87da8a86f" exitCode=0 Jan 20 05:07:35 crc kubenswrapper[4898]: I0120 05:07:35.430523 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gsmxb" event={"ID":"070ff34c-43d3-489b-b838-f15ac96e0615","Type":"ContainerDied","Data":"8269f2d8e34ae019ebfe9c6fd2aeb39a6bcd9433b684458b939113a87da8a86f"} Jan 20 05:07:35 crc kubenswrapper[4898]: I0120 05:07:35.754921 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zvkxk_4360d6c0-d5f1-49ae-917b-86560151e7ff/control-plane-machine-set-operator/0.log" Jan 20 05:07:35 crc kubenswrapper[4898]: I0120 05:07:35.906932 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-n8lsh_ee0cebe3-90ce-4443-8c95-4ac23ed2b98c/kube-rbac-proxy/0.log" Jan 20 05:07:35 crc kubenswrapper[4898]: I0120 05:07:35.973484 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-n8lsh_ee0cebe3-90ce-4443-8c95-4ac23ed2b98c/machine-api-operator/0.log" Jan 20 05:07:36 crc kubenswrapper[4898]: I0120 05:07:36.015136 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:36 crc kubenswrapper[4898]: I0120 05:07:36.015194 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:36 crc kubenswrapper[4898]: I0120 05:07:36.069801 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:36 crc kubenswrapper[4898]: I0120 05:07:36.193432 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:36 crc kubenswrapper[4898]: I0120 05:07:36.193510 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:36 crc kubenswrapper[4898]: I0120 05:07:36.440586 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gsmxb" 
event={"ID":"070ff34c-43d3-489b-b838-f15ac96e0615","Type":"ContainerStarted","Data":"58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f"} Jan 20 05:07:36 crc kubenswrapper[4898]: I0120 05:07:36.462692 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gsmxb" podStartSLOduration=4.608228192 podStartE2EDuration="7.462670289s" podCreationTimestamp="2026-01-20 05:07:29 +0000 UTC" firstStartedPulling="2026-01-20 05:07:33.413865868 +0000 UTC m=+4700.013653727" lastFinishedPulling="2026-01-20 05:07:36.268307965 +0000 UTC m=+4702.868095824" observedRunningTime="2026-01-20 05:07:36.456117936 +0000 UTC m=+4703.055905795" watchObservedRunningTime="2026-01-20 05:07:36.462670289 +0000 UTC m=+4703.062458148" Jan 20 05:07:36 crc kubenswrapper[4898]: I0120 05:07:36.489426 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:37 crc kubenswrapper[4898]: I0120 05:07:37.240930 4898 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9gmpv" podUID="74c2193b-2224-4618-9c62-4371508a17e4" containerName="registry-server" probeResult="failure" output=< Jan 20 05:07:37 crc kubenswrapper[4898]: timeout: failed to connect service ":50051" within 1s Jan 20 05:07:37 crc kubenswrapper[4898]: > Jan 20 05:07:38 crc kubenswrapper[4898]: I0120 05:07:38.440882 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sncxf"] Jan 20 05:07:38 crc kubenswrapper[4898]: I0120 05:07:38.463239 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sncxf" podUID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" containerName="registry-server" containerID="cri-o://07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f" gracePeriod=2 Jan 20 05:07:38 crc kubenswrapper[4898]: I0120 05:07:38.954324 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.017472 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w4cm\" (UniqueName: \"kubernetes.io/projected/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-kube-api-access-7w4cm\") pod \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.017690 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-catalog-content\") pod \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.017712 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-utilities\") pod \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\" (UID: \"030776dc-2a34-4607-8a44-b6b6e1f3eb8b\") " Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.018257 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-utilities" (OuterVolumeSpecName: "utilities") pod "030776dc-2a34-4607-8a44-b6b6e1f3eb8b" (UID: "030776dc-2a34-4607-8a44-b6b6e1f3eb8b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.024530 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-kube-api-access-7w4cm" (OuterVolumeSpecName: "kube-api-access-7w4cm") pod "030776dc-2a34-4607-8a44-b6b6e1f3eb8b" (UID: "030776dc-2a34-4607-8a44-b6b6e1f3eb8b"). InnerVolumeSpecName "kube-api-access-7w4cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.027348 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w4cm\" (UniqueName: \"kubernetes.io/projected/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-kube-api-access-7w4cm\") on node \"crc\" DevicePath \"\"" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.027375 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.036381 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "030776dc-2a34-4607-8a44-b6b6e1f3eb8b" (UID: "030776dc-2a34-4607-8a44-b6b6e1f3eb8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.128572 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030776dc-2a34-4607-8a44-b6b6e1f3eb8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.474940 4898 generic.go:334] "Generic (PLEG): container finished" podID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" containerID="07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f" exitCode=0 Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.474982 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sncxf" event={"ID":"030776dc-2a34-4607-8a44-b6b6e1f3eb8b","Type":"ContainerDied","Data":"07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f"} Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.475011 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sncxf" event={"ID":"030776dc-2a34-4607-8a44-b6b6e1f3eb8b","Type":"ContainerDied","Data":"dafe6e92371b383f1dd44ddc1959879a581bd9d6bd85b6092823d9f6ef9afb8b"} Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.475030 4898 scope.go:117] "RemoveContainer" containerID="07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.475162 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sncxf" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.497845 4898 scope.go:117] "RemoveContainer" containerID="ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.514414 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sncxf"] Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.522214 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sncxf"] Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.570734 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.570794 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:39 crc kubenswrapper[4898]: I0120 05:07:39.736151 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" path="/var/lib/kubelet/pods/030776dc-2a34-4607-8a44-b6b6e1f3eb8b/volumes" Jan 20 05:07:40 crc kubenswrapper[4898]: I0120 05:07:40.024063 4898 scope.go:117] "RemoveContainer" containerID="25b9a3b8ea99f54fdc0a148b1595de72f7061c3f75e44dcdd7f93b42ac344267" Jan 20 05:07:40 crc kubenswrapper[4898]: I0120 05:07:40.074253 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:40 crc kubenswrapper[4898]: I0120 05:07:40.074695 4898 scope.go:117] "RemoveContainer" containerID="07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f" Jan 20 05:07:40 crc kubenswrapper[4898]: E0120 05:07:40.075281 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f\": container with ID starting with 07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f not found: ID does not exist" containerID="07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f" Jan 20 05:07:40 crc kubenswrapper[4898]: I0120 05:07:40.075341 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f"} err="failed to get container status \"07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f\": rpc error: code = NotFound desc = could not find container \"07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f\": container with ID starting with 07ae1604061c73e9e5f421d2b58d54357991d55bea433926fea048e736c0366f not found: ID does not exist" Jan 20 05:07:40 crc kubenswrapper[4898]: I0120 05:07:40.075378 4898 scope.go:117] "RemoveContainer" containerID="ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef" Jan 20 05:07:40 crc kubenswrapper[4898]: E0120 05:07:40.075892 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef\": container with ID starting with ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef not found: ID does not exist" containerID="ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef" Jan 20 05:07:40 crc kubenswrapper[4898]: I0120 05:07:40.075933 4898 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef"} err="failed to get container status \"ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef\": rpc error: code = NotFound desc = could not find container \"ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef\": container with ID starting with ab943717c236177bd2289a07fd7271b3161389b2bd3b1511cd0afc6d779753ef not found: ID does not exist" Jan 20 05:07:40 crc kubenswrapper[4898]: I0120 05:07:40.075961 4898 scope.go:117] "RemoveContainer" containerID="25b9a3b8ea99f54fdc0a148b1595de72f7061c3f75e44dcdd7f93b42ac344267" Jan 20 05:07:40 crc kubenswrapper[4898]: E0120 05:07:40.076364 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25b9a3b8ea99f54fdc0a148b1595de72f7061c3f75e44dcdd7f93b42ac344267\": container with ID starting with 25b9a3b8ea99f54fdc0a148b1595de72f7061c3f75e44dcdd7f93b42ac344267 not found: ID does not exist" containerID="25b9a3b8ea99f54fdc0a148b1595de72f7061c3f75e44dcdd7f93b42ac344267" Jan 20 05:07:40 crc kubenswrapper[4898]: I0120 05:07:40.076392 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25b9a3b8ea99f54fdc0a148b1595de72f7061c3f75e44dcdd7f93b42ac344267"} err="failed to get container status \"25b9a3b8ea99f54fdc0a148b1595de72f7061c3f75e44dcdd7f93b42ac344267\": rpc error: code = NotFound desc = could not find container \"25b9a3b8ea99f54fdc0a148b1595de72f7061c3f75e44dcdd7f93b42ac344267\": container with ID starting with 25b9a3b8ea99f54fdc0a148b1595de72f7061c3f75e44dcdd7f93b42ac344267 not found: ID does not exist" Jan 20 05:07:41 crc kubenswrapper[4898]: I0120 05:07:41.721548 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:07:41 crc kubenswrapper[4898]: E0120 05:07:41.722039 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:07:46 crc kubenswrapper[4898]: I0120 05:07:46.232206 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:46 crc kubenswrapper[4898]: I0120 05:07:46.289023 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:46 crc kubenswrapper[4898]: I0120 05:07:46.477048 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gmpv"] Jan 20 05:07:47 crc kubenswrapper[4898]: I0120 05:07:47.561282 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9gmpv" podUID="74c2193b-2224-4618-9c62-4371508a17e4" containerName="registry-server" containerID="cri-o://1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a" gracePeriod=2 Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.527457 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.601382 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh454\" (UniqueName: \"kubernetes.io/projected/74c2193b-2224-4618-9c62-4371508a17e4-kube-api-access-gh454\") pod \"74c2193b-2224-4618-9c62-4371508a17e4\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.601450 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-catalog-content\") pod \"74c2193b-2224-4618-9c62-4371508a17e4\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.601497 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-utilities\") pod \"74c2193b-2224-4618-9c62-4371508a17e4\" (UID: \"74c2193b-2224-4618-9c62-4371508a17e4\") " Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.602549 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-utilities" (OuterVolumeSpecName: "utilities") pod "74c2193b-2224-4618-9c62-4371508a17e4" (UID: "74c2193b-2224-4618-9c62-4371508a17e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.603596 4898 generic.go:334] "Generic (PLEG): container finished" podID="74c2193b-2224-4618-9c62-4371508a17e4" containerID="1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a" exitCode=0 Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.603626 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gmpv" event={"ID":"74c2193b-2224-4618-9c62-4371508a17e4","Type":"ContainerDied","Data":"1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a"} Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.603648 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gmpv" event={"ID":"74c2193b-2224-4618-9c62-4371508a17e4","Type":"ContainerDied","Data":"e78f4a0406e7e966db90c1d7c6a78ab4f660648707a058075bfc53533813e6ac"} Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.603664 4898 scope.go:117] "RemoveContainer" containerID="1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.603771 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gmpv" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.609678 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c2193b-2224-4618-9c62-4371508a17e4-kube-api-access-gh454" (OuterVolumeSpecName: "kube-api-access-gh454") pod "74c2193b-2224-4618-9c62-4371508a17e4" (UID: "74c2193b-2224-4618-9c62-4371508a17e4"). InnerVolumeSpecName "kube-api-access-gh454". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.662537 4898 scope.go:117] "RemoveContainer" containerID="f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.712649 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh454\" (UniqueName: \"kubernetes.io/projected/74c2193b-2224-4618-9c62-4371508a17e4-kube-api-access-gh454\") on node \"crc\" DevicePath \"\"" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.712676 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.716383 4898 scope.go:117] "RemoveContainer" containerID="e9b84f2bcda58b38e3dd64b30516f524de372c5caa383a8daa8aae052c5378d4" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.749571 4898 scope.go:117] "RemoveContainer" containerID="1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.752485 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74c2193b-2224-4618-9c62-4371508a17e4" (UID: "74c2193b-2224-4618-9c62-4371508a17e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 05:07:48 crc kubenswrapper[4898]: E0120 05:07:48.760610 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a\": container with ID starting with 1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a not found: ID does not exist" containerID="1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.760652 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a"} err="failed to get container status \"1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a\": rpc error: code = NotFound desc = could not find container \"1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a\": container with ID starting with 1a88bc0af2a89848dc196b51e4a525dfc4608ebe7ce068feb7eb5c36340fc98a not found: ID does not exist" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.760677 4898 scope.go:117] "RemoveContainer" containerID="f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52" Jan 20 05:07:48 crc kubenswrapper[4898]: E0120 05:07:48.761172 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52\": container with ID starting with f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52 not found: ID does not exist" containerID="f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.761198 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52"} err="failed to get container status 
\"f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52\": rpc error: code = NotFound desc = could not find container \"f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52\": container with ID starting with f921b5ba897d99c9dd0edaf2a4380c1570f00b1ad70a454dcf5c0db4d8b8dc52 not found: ID does not exist" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.761214 4898 scope.go:117] "RemoveContainer" containerID="e9b84f2bcda58b38e3dd64b30516f524de372c5caa383a8daa8aae052c5378d4" Jan 20 05:07:48 crc kubenswrapper[4898]: E0120 05:07:48.761603 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9b84f2bcda58b38e3dd64b30516f524de372c5caa383a8daa8aae052c5378d4\": container with ID starting with e9b84f2bcda58b38e3dd64b30516f524de372c5caa383a8daa8aae052c5378d4 not found: ID does not exist" containerID="e9b84f2bcda58b38e3dd64b30516f524de372c5caa383a8daa8aae052c5378d4" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.761623 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9b84f2bcda58b38e3dd64b30516f524de372c5caa383a8daa8aae052c5378d4"} err="failed to get container status \"e9b84f2bcda58b38e3dd64b30516f524de372c5caa383a8daa8aae052c5378d4\": rpc error: code = NotFound desc = could not find container \"e9b84f2bcda58b38e3dd64b30516f524de372c5caa383a8daa8aae052c5378d4\": container with ID starting with e9b84f2bcda58b38e3dd64b30516f524de372c5caa383a8daa8aae052c5378d4 not found: ID does not exist" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.814835 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74c2193b-2224-4618-9c62-4371508a17e4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.930951 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gmpv"] Jan 20 05:07:48 crc kubenswrapper[4898]: I0120 05:07:48.936986 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9gmpv"] Jan 20 05:07:49 crc kubenswrapper[4898]: I0120 05:07:49.624422 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:49 crc kubenswrapper[4898]: I0120 05:07:49.730728 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74c2193b-2224-4618-9c62-4371508a17e4" path="/var/lib/kubelet/pods/74c2193b-2224-4618-9c62-4371508a17e4/volumes" Jan 20 05:07:49 crc kubenswrapper[4898]: I0120 05:07:49.796457 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-5s8hc_e19b167c-d354-4ecd-b5d6-f9c233efde6a/cert-manager-controller/0.log" Jan 20 05:07:49 crc kubenswrapper[4898]: I0120 05:07:49.973145 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-sknqc_1889e338-774d-44ba-b369-1de424fa7abd/cert-manager-cainjector/0.log" Jan 20 05:07:50 crc kubenswrapper[4898]: I0120 05:07:50.041074 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rjgkr_1a4689db-712b-4b11-8b22-9f81fd060ac2/cert-manager-webhook/0.log" Jan 20 05:07:51 crc kubenswrapper[4898]: I0120 05:07:51.874732 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gsmxb"] Jan 20 05:07:51 crc 
kubenswrapper[4898]: I0120 05:07:51.875318 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gsmxb" podUID="070ff34c-43d3-489b-b838-f15ac96e0615" containerName="registry-server" containerID="cri-o://58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f" gracePeriod=2 Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.332566 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.376084 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-utilities\") pod \"070ff34c-43d3-489b-b838-f15ac96e0615\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.376240 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzm5x\" (UniqueName: \"kubernetes.io/projected/070ff34c-43d3-489b-b838-f15ac96e0615-kube-api-access-vzm5x\") pod \"070ff34c-43d3-489b-b838-f15ac96e0615\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.376312 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-catalog-content\") pod \"070ff34c-43d3-489b-b838-f15ac96e0615\" (UID: \"070ff34c-43d3-489b-b838-f15ac96e0615\") " Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.377127 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-utilities" (OuterVolumeSpecName: "utilities") pod "070ff34c-43d3-489b-b838-f15ac96e0615" (UID: "070ff34c-43d3-489b-b838-f15ac96e0615"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.389653 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070ff34c-43d3-489b-b838-f15ac96e0615-kube-api-access-vzm5x" (OuterVolumeSpecName: "kube-api-access-vzm5x") pod "070ff34c-43d3-489b-b838-f15ac96e0615" (UID: "070ff34c-43d3-489b-b838-f15ac96e0615"). InnerVolumeSpecName "kube-api-access-vzm5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.428308 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "070ff34c-43d3-489b-b838-f15ac96e0615" (UID: "070ff34c-43d3-489b-b838-f15ac96e0615"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.478396 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.478422 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzm5x\" (UniqueName: \"kubernetes.io/projected/070ff34c-43d3-489b-b838-f15ac96e0615-kube-api-access-vzm5x\") on node \"crc\" DevicePath \"\"" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.478473 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070ff34c-43d3-489b-b838-f15ac96e0615-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.637279 4898 generic.go:334] "Generic (PLEG): container finished" podID="070ff34c-43d3-489b-b838-f15ac96e0615" containerID="58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f" exitCode=0 Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.637323 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gsmxb" event={"ID":"070ff34c-43d3-489b-b838-f15ac96e0615","Type":"ContainerDied","Data":"58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f"} Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.637347 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gsmxb" event={"ID":"070ff34c-43d3-489b-b838-f15ac96e0615","Type":"ContainerDied","Data":"fcee87a8cb4813629894bcab642c05f983e2088815d4f9f6def2c0018b09d3af"} Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.637363 4898 scope.go:117] "RemoveContainer" containerID="58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.637496 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gsmxb" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.669115 4898 scope.go:117] "RemoveContainer" containerID="8269f2d8e34ae019ebfe9c6fd2aeb39a6bcd9433b684458b939113a87da8a86f" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.672333 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gsmxb"] Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.679302 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gsmxb"] Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.689657 4898 scope.go:117] "RemoveContainer" containerID="f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.724497 4898 scope.go:117] "RemoveContainer" containerID="58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f" Jan 20 05:07:52 crc kubenswrapper[4898]: E0120 05:07:52.724930 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f\": container with ID starting with 58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f not found: ID does not exist" containerID="58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.724971 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f"} err="failed to get container status \"58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f\": rpc error: code = NotFound desc = could not find container \"58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f\": container with ID starting with 58f3b5f594a7c412fd12ecf9a1856902dffee2adda308ff8e5b6971864e1e67f not found: ID does not exist" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.724996 4898 scope.go:117] "RemoveContainer" containerID="8269f2d8e34ae019ebfe9c6fd2aeb39a6bcd9433b684458b939113a87da8a86f" Jan 20 05:07:52 crc kubenswrapper[4898]: E0120 05:07:52.726282 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8269f2d8e34ae019ebfe9c6fd2aeb39a6bcd9433b684458b939113a87da8a86f\": container with ID starting with 8269f2d8e34ae019ebfe9c6fd2aeb39a6bcd9433b684458b939113a87da8a86f not found: ID does not exist" containerID="8269f2d8e34ae019ebfe9c6fd2aeb39a6bcd9433b684458b939113a87da8a86f" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.726563 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8269f2d8e34ae019ebfe9c6fd2aeb39a6bcd9433b684458b939113a87da8a86f"} err="failed to get container status \"8269f2d8e34ae019ebfe9c6fd2aeb39a6bcd9433b684458b939113a87da8a86f\": rpc error: code = NotFound desc = could not find container \"8269f2d8e34ae019ebfe9c6fd2aeb39a6bcd9433b684458b939113a87da8a86f\": container with ID starting with 8269f2d8e34ae019ebfe9c6fd2aeb39a6bcd9433b684458b939113a87da8a86f not found: ID does not exist" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.726586 4898 scope.go:117] "RemoveContainer" containerID="f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f" Jan 20 05:07:52 crc kubenswrapper[4898]: E0120 05:07:52.726814 4898 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f\": container with ID starting with f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f not found: ID does not exist" containerID="f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f" Jan 20 05:07:52 crc kubenswrapper[4898]: I0120 05:07:52.726838 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f"} err="failed to get container status \"f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f\": rpc error: code = NotFound desc = could not find container \"f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f\": container with ID starting with f39edab04370123615a2f2259d2a3dd306f19cc41dc3dfad3741cbda5c4f071f not found: ID does not exist" Jan 20 05:07:53 crc kubenswrapper[4898]: I0120 05:07:53.734478 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070ff34c-43d3-489b-b838-f15ac96e0615" path="/var/lib/kubelet/pods/070ff34c-43d3-489b-b838-f15ac96e0615/volumes" Jan 20 05:07:55 crc kubenswrapper[4898]: I0120 05:07:55.722146 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:07:55 crc kubenswrapper[4898]: E0120 05:07:55.723549 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:08:03 crc kubenswrapper[4898]: I0120 05:08:03.569507 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-bhn64_b61979e3-553f-4098-a721-419fdc230e8b/nmstate-console-plugin/0.log" Jan 20 05:08:03 crc kubenswrapper[4898]: I0120 05:08:03.741071 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-67l7k_ed4eda94-be5e-496a-922e-96edad89ca92/nmstate-handler/0.log" Jan 20 05:08:03 crc kubenswrapper[4898]: I0120 05:08:03.767604 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6zv69_2575ee56-4994-4cc5-b686-9974bc3ba295/kube-rbac-proxy/0.log" Jan 20 05:08:03 crc kubenswrapper[4898]: I0120 05:08:03.810148 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6zv69_2575ee56-4994-4cc5-b686-9974bc3ba295/nmstate-metrics/0.log" Jan 20 05:08:03 crc kubenswrapper[4898]: I0120 05:08:03.941325 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-plsmq_268b5e9a-7692-44e0-989a-bbbeeaee9d51/nmstate-operator/0.log" Jan 20 05:08:04 crc kubenswrapper[4898]: I0120 05:08:04.002911 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-2llzh_f207f14d-bc52-4b04-b325-05ccc1b4351a/nmstate-webhook/0.log" Jan 20 05:08:08 crc kubenswrapper[4898]: I0120 05:08:08.721906 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:08:08 crc kubenswrapper[4898]: E0120 05:08:08.722716 4898 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:08:19 crc kubenswrapper[4898]: I0120 05:08:19.623958 4898 scope.go:117] "RemoveContainer" containerID="31e532d8fd148325fa19c7e64acb94bc9400caac9c71fabfe0d9ea4f9ac99bf0" Jan 20 05:08:19 crc kubenswrapper[4898]: I0120 05:08:19.648950 4898 scope.go:117] "RemoveContainer" containerID="17e97204aae5bc6638d5352c0afab41c4559246957d0a50d451b6957fb3b9601" Jan 20 05:08:19 crc kubenswrapper[4898]: I0120 05:08:19.669485 4898 scope.go:117] "RemoveContainer" containerID="ead2b8898861cde82bdf1048cd5f20fbb051ca03e594206633f2edce1373e6ad" Jan 20 05:08:19 crc kubenswrapper[4898]: I0120 05:08:19.722227 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:08:19 crc kubenswrapper[4898]: E0120 05:08:19.722446 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:08:31 crc kubenswrapper[4898]: I0120 05:08:31.605707 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-5zgx8_1b647c19-565c-4041-980f-2455d029079c/kube-rbac-proxy/0.log" Jan 20 05:08:31 crc kubenswrapper[4898]: I0120 05:08:31.721028 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:08:31 crc kubenswrapper[4898]: E0120 05:08:31.721347 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:08:31 crc kubenswrapper[4898]: I0120 05:08:31.727968 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-5zgx8_1b647c19-565c-4041-980f-2455d029079c/controller/0.log" Jan 20 05:08:32 crc kubenswrapper[4898]: I0120 05:08:32.422589 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-frr-files/0.log" Jan 20 05:08:32 crc kubenswrapper[4898]: I0120 05:08:32.593238 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-frr-files/0.log" Jan 20 05:08:32 crc kubenswrapper[4898]: I0120 05:08:32.625164 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-reloader/0.log" Jan 20 05:08:32 crc kubenswrapper[4898]: I0120 05:08:32.625893 4898 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-metrics/0.log" Jan 20 05:08:32 crc kubenswrapper[4898]: I0120 05:08:32.642126 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-reloader/0.log" Jan 20 05:08:32 crc kubenswrapper[4898]: I0120 05:08:32.818100 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-metrics/0.log" Jan 20 05:08:32 crc kubenswrapper[4898]: I0120 05:08:32.834090 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-frr-files/0.log" Jan 20 05:08:32 crc kubenswrapper[4898]: I0120 05:08:32.834534 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-metrics/0.log" Jan 20 05:08:32 crc kubenswrapper[4898]: I0120 05:08:32.857516 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-reloader/0.log" Jan 20 05:08:32 crc kubenswrapper[4898]: I0120 05:08:32.977948 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-frr-files/0.log" Jan 20 05:08:32 crc kubenswrapper[4898]: I0120 05:08:32.979026 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-reloader/0.log" Jan 20 05:08:33 crc kubenswrapper[4898]: I0120 05:08:33.031859 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/controller/0.log" Jan 20 05:08:33 crc kubenswrapper[4898]: I0120 05:08:33.040061 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/cp-metrics/0.log" Jan 20 05:08:33 crc kubenswrapper[4898]: I0120 05:08:33.149876 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/frr-metrics/0.log" Jan 20 05:08:33 crc kubenswrapper[4898]: I0120 05:08:33.220411 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/kube-rbac-proxy/0.log" Jan 20 05:08:33 crc kubenswrapper[4898]: I0120 05:08:33.229773 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/kube-rbac-proxy-frr/0.log" Jan 20 05:08:33 crc kubenswrapper[4898]: I0120 05:08:33.370261 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/reloader/0.log" Jan 20 05:08:33 crc kubenswrapper[4898]: I0120 05:08:33.438509 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-g87ks_f7da4c82-f1c1-494d-8f99-ea71c542169e/frr-k8s-webhook-server/0.log" Jan 20 05:08:33 crc kubenswrapper[4898]: I0120 05:08:33.600173 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7bfbbff6f9-rppqz_ca361fa9-3501-4e45-b43a-5344a65efac5/manager/0.log" Jan 20 05:08:33 crc kubenswrapper[4898]: I0120 05:08:33.784332 4898 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-656bd465d-j8r6j_a4436d8b-9043-4dfd-8f29-e6dd7bee46bc/webhook-server/0.log" Jan 20 05:08:33 crc kubenswrapper[4898]: I0120 05:08:33.881453 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-27wmr_beba60fc-d482-43a1-885b-b03a082a4e95/kube-rbac-proxy/0.log" Jan 20 05:08:34 crc kubenswrapper[4898]: I0120 05:08:34.410348 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-27wmr_beba60fc-d482-43a1-885b-b03a082a4e95/speaker/0.log" Jan 20 05:08:34 crc kubenswrapper[4898]: I0120 05:08:34.471752 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f9q7w_033a1a1e-99a3-4195-bd53-bf46c4e768b7/frr/0.log" Jan 20 05:08:43 crc kubenswrapper[4898]: I0120 05:08:43.731773 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:08:43 crc kubenswrapper[4898]: E0120 05:08:43.732515 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:08:46 crc kubenswrapper[4898]: I0120 05:08:46.551952 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q_fb18e186-7d5a-4bb9-b100-6f257ee07319/util/0.log" Jan 20 05:08:46 crc kubenswrapper[4898]: I0120 05:08:46.719424 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q_fb18e186-7d5a-4bb9-b100-6f257ee07319/util/0.log" Jan 20 05:08:46 crc kubenswrapper[4898]: I0120 05:08:46.731486 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q_fb18e186-7d5a-4bb9-b100-6f257ee07319/pull/0.log" Jan 20 05:08:46 crc kubenswrapper[4898]: I0120 05:08:46.755467 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q_fb18e186-7d5a-4bb9-b100-6f257ee07319/pull/0.log" Jan 20 05:08:46 crc kubenswrapper[4898]: I0120 05:08:46.925647 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q_fb18e186-7d5a-4bb9-b100-6f257ee07319/pull/0.log" Jan 20 05:08:46 crc kubenswrapper[4898]: I0120 05:08:46.950786 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q_fb18e186-7d5a-4bb9-b100-6f257ee07319/util/0.log" Jan 20 05:08:46 crc kubenswrapper[4898]: I0120 05:08:46.959798 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsm46q_fb18e186-7d5a-4bb9-b100-6f257ee07319/extract/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.069180 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl_1ebb9a61-7bd6-434c-b16d-2d08d38ef556/util/0.log" Jan 20 
05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.241146 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl_1ebb9a61-7bd6-434c-b16d-2d08d38ef556/util/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.253451 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl_1ebb9a61-7bd6-434c-b16d-2d08d38ef556/pull/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.258970 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl_1ebb9a61-7bd6-434c-b16d-2d08d38ef556/pull/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.443485 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl_1ebb9a61-7bd6-434c-b16d-2d08d38ef556/util/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.473403 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl_1ebb9a61-7bd6-434c-b16d-2d08d38ef556/extract/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.505810 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mftzl_1ebb9a61-7bd6-434c-b16d-2d08d38ef556/pull/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.609988 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pnfgk_4464047d-a0d0-4b7c-aa1f-3553b8f0f04c/extract-utilities/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.763963 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pnfgk_4464047d-a0d0-4b7c-aa1f-3553b8f0f04c/extract-content/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.778670 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pnfgk_4464047d-a0d0-4b7c-aa1f-3553b8f0f04c/extract-utilities/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.797148 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pnfgk_4464047d-a0d0-4b7c-aa1f-3553b8f0f04c/extract-content/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.946069 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pnfgk_4464047d-a0d0-4b7c-aa1f-3553b8f0f04c/extract-utilities/0.log" Jan 20 05:08:47 crc kubenswrapper[4898]: I0120 05:08:47.946099 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pnfgk_4464047d-a0d0-4b7c-aa1f-3553b8f0f04c/extract-content/0.log" Jan 20 05:08:48 crc kubenswrapper[4898]: I0120 05:08:48.142934 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nn7q7_22d509aa-3e38-4323-87fb-ff9b23c0dd2a/extract-utilities/0.log" Jan 20 05:08:48 crc kubenswrapper[4898]: I0120 05:08:48.341544 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nn7q7_22d509aa-3e38-4323-87fb-ff9b23c0dd2a/extract-content/0.log" Jan 20 05:08:48 crc kubenswrapper[4898]: I0120 05:08:48.402371 4898 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_community-operators-nn7q7_22d509aa-3e38-4323-87fb-ff9b23c0dd2a/extract-utilities/0.log" Jan 20 05:08:48 crc kubenswrapper[4898]: I0120 05:08:48.422217 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nn7q7_22d509aa-3e38-4323-87fb-ff9b23c0dd2a/extract-content/0.log" Jan 20 05:08:48 crc kubenswrapper[4898]: I0120 05:08:48.497362 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pnfgk_4464047d-a0d0-4b7c-aa1f-3553b8f0f04c/registry-server/0.log" Jan 20 05:08:48 crc kubenswrapper[4898]: I0120 05:08:48.555835 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nn7q7_22d509aa-3e38-4323-87fb-ff9b23c0dd2a/extract-content/0.log" Jan 20 05:08:48 crc kubenswrapper[4898]: I0120 05:08:48.595869 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nn7q7_22d509aa-3e38-4323-87fb-ff9b23c0dd2a/extract-utilities/0.log" Jan 20 05:08:48 crc kubenswrapper[4898]: I0120 05:08:48.742718 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6zngz_4ada7bb7-b089-45d1-8314-5a3218932dfb/marketplace-operator/0.log" Jan 20 05:08:48 crc kubenswrapper[4898]: I0120 05:08:48.908691 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rmfj_4677b629-b059-4952-b816-45484f784fec/extract-utilities/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.134267 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nn7q7_22d509aa-3e38-4323-87fb-ff9b23c0dd2a/registry-server/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.139209 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rmfj_4677b629-b059-4952-b816-45484f784fec/extract-utilities/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.190161 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rmfj_4677b629-b059-4952-b816-45484f784fec/extract-content/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.206598 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rmfj_4677b629-b059-4952-b816-45484f784fec/extract-content/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.354845 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rmfj_4677b629-b059-4952-b816-45484f784fec/extract-utilities/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.386084 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rmfj_4677b629-b059-4952-b816-45484f784fec/extract-content/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.556648 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qtczp_b8875827-2900-4d96-ae50-be27e6fe41da/extract-utilities/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.565451 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rmfj_4677b629-b059-4952-b816-45484f784fec/registry-server/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.688885 4898 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-qtczp_b8875827-2900-4d96-ae50-be27e6fe41da/extract-utilities/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.690198 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qtczp_b8875827-2900-4d96-ae50-be27e6fe41da/extract-content/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.756499 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qtczp_b8875827-2900-4d96-ae50-be27e6fe41da/extract-content/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.890461 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qtczp_b8875827-2900-4d96-ae50-be27e6fe41da/extract-content/0.log" Jan 20 05:08:49 crc kubenswrapper[4898]: I0120 05:08:49.900620 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qtczp_b8875827-2900-4d96-ae50-be27e6fe41da/extract-utilities/0.log" Jan 20 05:08:50 crc kubenswrapper[4898]: I0120 05:08:50.534038 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qtczp_b8875827-2900-4d96-ae50-be27e6fe41da/registry-server/0.log" Jan 20 05:08:54 crc kubenswrapper[4898]: I0120 05:08:54.721917 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:08:54 crc kubenswrapper[4898]: E0120 05:08:54.722830 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:09:05 crc kubenswrapper[4898]: I0120 05:09:05.721831 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:09:05 crc kubenswrapper[4898]: E0120 05:09:05.722591 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:09:16 crc kubenswrapper[4898]: I0120 05:09:16.723093 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:09:16 crc kubenswrapper[4898]: E0120 05:09:16.723856 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:09:28 crc kubenswrapper[4898]: I0120 05:09:28.721995 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:09:28 crc kubenswrapper[4898]: E0120 05:09:28.722943 4898 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:09:40 crc kubenswrapper[4898]: I0120 05:09:40.721492 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:09:40 crc kubenswrapper[4898]: E0120 05:09:40.722765 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:09:53 crc kubenswrapper[4898]: I0120 05:09:53.733101 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:09:53 crc kubenswrapper[4898]: E0120 05:09:53.734529 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:10:07 crc kubenswrapper[4898]: I0120 05:10:07.721141 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:10:07 crc kubenswrapper[4898]: E0120 05:10:07.722137 4898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cwlf6_openshift-machine-config-operator(aef68392-4b9d-4a0c-a90e-8f04051fda21)\"" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" Jan 20 05:10:21 crc kubenswrapper[4898]: I0120 05:10:21.722255 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:10:22 crc kubenswrapper[4898]: I0120 05:10:22.033319 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"d539c2a651748ca7978ba9ba3cdb5c7a5dc4715a629b42ee2ce0b79ff74f2096"} Jan 20 05:10:22 crc kubenswrapper[4898]: I0120 05:10:22.034849 4898 generic.go:334] "Generic (PLEG): container finished" podID="0bf4f276-0eff-4c1c-83e3-005dc6004446" containerID="bcf2b2e7d01cfc72aa7d8e7048ccfc9dcb4f0ffeddc656d6ee4ca65144da5302" exitCode=0 Jan 20 05:10:22 crc kubenswrapper[4898]: I0120 05:10:22.034898 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-765mp/must-gather-v7gff" event={"ID":"0bf4f276-0eff-4c1c-83e3-005dc6004446","Type":"ContainerDied","Data":"bcf2b2e7d01cfc72aa7d8e7048ccfc9dcb4f0ffeddc656d6ee4ca65144da5302"} Jan 20 05:10:22 crc 
kubenswrapper[4898]: I0120 05:10:22.035513 4898 scope.go:117] "RemoveContainer" containerID="bcf2b2e7d01cfc72aa7d8e7048ccfc9dcb4f0ffeddc656d6ee4ca65144da5302" Jan 20 05:10:22 crc kubenswrapper[4898]: I0120 05:10:22.537092 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-765mp_must-gather-v7gff_0bf4f276-0eff-4c1c-83e3-005dc6004446/gather/0.log" Jan 20 05:10:30 crc kubenswrapper[4898]: I0120 05:10:30.743740 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-765mp/must-gather-v7gff"] Jan 20 05:10:30 crc kubenswrapper[4898]: I0120 05:10:30.744596 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-765mp/must-gather-v7gff" podUID="0bf4f276-0eff-4c1c-83e3-005dc6004446" containerName="copy" containerID="cri-o://48d5299eb6257f76f558f3c69107be418daf1cb23eec2234770f23e73a4d5299" gracePeriod=2 Jan 20 05:10:30 crc kubenswrapper[4898]: I0120 05:10:30.759168 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-765mp/must-gather-v7gff"] Jan 20 05:10:31 crc kubenswrapper[4898]: I0120 05:10:31.105529 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-765mp_must-gather-v7gff_0bf4f276-0eff-4c1c-83e3-005dc6004446/copy/0.log" Jan 20 05:10:31 crc kubenswrapper[4898]: I0120 05:10:31.106174 4898 generic.go:334] "Generic (PLEG): container finished" podID="0bf4f276-0eff-4c1c-83e3-005dc6004446" containerID="48d5299eb6257f76f558f3c69107be418daf1cb23eec2234770f23e73a4d5299" exitCode=143 Jan 20 05:10:31 crc kubenswrapper[4898]: I0120 05:10:31.212113 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-765mp_must-gather-v7gff_0bf4f276-0eff-4c1c-83e3-005dc6004446/copy/0.log" Jan 20 05:10:31 crc kubenswrapper[4898]: I0120 05:10:31.212564 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-765mp/must-gather-v7gff" Jan 20 05:10:31 crc kubenswrapper[4898]: I0120 05:10:31.238215 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0bf4f276-0eff-4c1c-83e3-005dc6004446-must-gather-output\") pod \"0bf4f276-0eff-4c1c-83e3-005dc6004446\" (UID: \"0bf4f276-0eff-4c1c-83e3-005dc6004446\") " Jan 20 05:10:31 crc kubenswrapper[4898]: I0120 05:10:31.238397 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfv4q\" (UniqueName: \"kubernetes.io/projected/0bf4f276-0eff-4c1c-83e3-005dc6004446-kube-api-access-qfv4q\") pod \"0bf4f276-0eff-4c1c-83e3-005dc6004446\" (UID: \"0bf4f276-0eff-4c1c-83e3-005dc6004446\") " Jan 20 05:10:31 crc kubenswrapper[4898]: I0120 05:10:31.258800 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf4f276-0eff-4c1c-83e3-005dc6004446-kube-api-access-qfv4q" (OuterVolumeSpecName: "kube-api-access-qfv4q") pod "0bf4f276-0eff-4c1c-83e3-005dc6004446" (UID: "0bf4f276-0eff-4c1c-83e3-005dc6004446"). InnerVolumeSpecName "kube-api-access-qfv4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 05:10:31 crc kubenswrapper[4898]: I0120 05:10:31.341271 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfv4q\" (UniqueName: \"kubernetes.io/projected/0bf4f276-0eff-4c1c-83e3-005dc6004446-kube-api-access-qfv4q\") on node \"crc\" DevicePath \"\"" Jan 20 05:10:31 crc kubenswrapper[4898]: I0120 05:10:31.386242 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf4f276-0eff-4c1c-83e3-005dc6004446-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0bf4f276-0eff-4c1c-83e3-005dc6004446" (UID: "0bf4f276-0eff-4c1c-83e3-005dc6004446"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 05:10:31 crc kubenswrapper[4898]: I0120 05:10:31.443564 4898 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0bf4f276-0eff-4c1c-83e3-005dc6004446-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 20 05:10:31 crc kubenswrapper[4898]: I0120 05:10:31.730781 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf4f276-0eff-4c1c-83e3-005dc6004446" path="/var/lib/kubelet/pods/0bf4f276-0eff-4c1c-83e3-005dc6004446/volumes" Jan 20 05:10:32 crc kubenswrapper[4898]: I0120 05:10:32.115729 4898 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-765mp_must-gather-v7gff_0bf4f276-0eff-4c1c-83e3-005dc6004446/copy/0.log" Jan 20 05:10:32 crc kubenswrapper[4898]: I0120 05:10:32.116064 4898 scope.go:117] "RemoveContainer" containerID="48d5299eb6257f76f558f3c69107be418daf1cb23eec2234770f23e73a4d5299" Jan 20 05:10:32 crc kubenswrapper[4898]: I0120 05:10:32.116209 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-765mp/must-gather-v7gff" Jan 20 05:10:32 crc kubenswrapper[4898]: I0120 05:10:32.135011 4898 scope.go:117] "RemoveContainer" containerID="bcf2b2e7d01cfc72aa7d8e7048ccfc9dcb4f0ffeddc656d6ee4ca65144da5302" Jan 20 05:12:20 crc kubenswrapper[4898]: I0120 05:12:20.444365 4898 scope.go:117] "RemoveContainer" containerID="721b62c6a98461e309efd1422699c1dd3c79fdb2a6a6f2c8f38eb904d3c3a85b" Jan 20 05:12:39 crc kubenswrapper[4898]: I0120 05:12:39.975827 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 05:12:39 crc kubenswrapper[4898]: I0120 05:12:39.976519 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.220685 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-26wkb"] Jan 20 05:13:09 crc kubenswrapper[4898]: E0120 05:13:09.221817 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf4f276-0eff-4c1c-83e3-005dc6004446" containerName="gather" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.221841 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf4f276-0eff-4c1c-83e3-005dc6004446" containerName="gather" Jan 20 05:13:09 crc kubenswrapper[4898]: E0120 05:13:09.221856 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070ff34c-43d3-489b-b838-f15ac96e0615" containerName="extract-content" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.221866 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="070ff34c-43d3-489b-b838-f15ac96e0615" containerName="extract-content" Jan 20 05:13:09 crc kubenswrapper[4898]: E0120 05:13:09.221889 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c2193b-2224-4618-9c62-4371508a17e4" containerName="extract-content" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.221900 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c2193b-2224-4618-9c62-4371508a17e4" containerName="extract-content" Jan 20 05:13:09 crc kubenswrapper[4898]: E0120 05:13:09.221924 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" containerName="registry-server" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.221935 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" containerName="registry-server" Jan 20 05:13:09 crc kubenswrapper[4898]: E0120 05:13:09.221955 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070ff34c-43d3-489b-b838-f15ac96e0615" containerName="registry-server" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.221965 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="070ff34c-43d3-489b-b838-f15ac96e0615" containerName="registry-server" Jan 20 05:13:09 crc kubenswrapper[4898]: E0120 05:13:09.221989 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf4f276-0eff-4c1c-83e3-005dc6004446" containerName="copy" Jan 20 05:13:09 crc kubenswrapper[4898]: 
I0120 05:13:09.221999 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf4f276-0eff-4c1c-83e3-005dc6004446" containerName="copy" Jan 20 05:13:09 crc kubenswrapper[4898]: E0120 05:13:09.222022 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" containerName="extract-content" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.222031 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" containerName="extract-content" Jan 20 05:13:09 crc kubenswrapper[4898]: E0120 05:13:09.222047 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070ff34c-43d3-489b-b838-f15ac96e0615" containerName="extract-utilities" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.222079 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="070ff34c-43d3-489b-b838-f15ac96e0615" containerName="extract-utilities" Jan 20 05:13:09 crc kubenswrapper[4898]: E0120 05:13:09.222103 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c2193b-2224-4618-9c62-4371508a17e4" containerName="registry-server" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.222113 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c2193b-2224-4618-9c62-4371508a17e4" containerName="registry-server" Jan 20 05:13:09 crc kubenswrapper[4898]: E0120 05:13:09.222128 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" containerName="extract-utilities" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.222136 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" containerName="extract-utilities" Jan 20 05:13:09 crc kubenswrapper[4898]: E0120 05:13:09.222151 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c2193b-2224-4618-9c62-4371508a17e4" containerName="extract-utilities" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.222160 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c2193b-2224-4618-9c62-4371508a17e4" containerName="extract-utilities" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.222421 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf4f276-0eff-4c1c-83e3-005dc6004446" containerName="gather" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.222704 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="070ff34c-43d3-489b-b838-f15ac96e0615" containerName="registry-server" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.222883 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf4f276-0eff-4c1c-83e3-005dc6004446" containerName="copy" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.222923 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c2193b-2224-4618-9c62-4371508a17e4" containerName="registry-server" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.222946 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="030776dc-2a34-4607-8a44-b6b6e1f3eb8b" containerName="registry-server" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.225239 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.259544 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-26wkb"] Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.303484 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqgr\" (UniqueName: \"kubernetes.io/projected/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-kube-api-access-2zqgr\") pod \"community-operators-26wkb\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.303615 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-catalog-content\") pod \"community-operators-26wkb\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.303671 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-utilities\") pod \"community-operators-26wkb\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.405667 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-utilities\") pod \"community-operators-26wkb\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.405901 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zqgr\" (UniqueName: \"kubernetes.io/projected/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-kube-api-access-2zqgr\") pod \"community-operators-26wkb\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.406016 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-catalog-content\") pod \"community-operators-26wkb\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.406639 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-utilities\") pod \"community-operators-26wkb\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.406988 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-catalog-content\") pod \"community-operators-26wkb\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.429808 4898 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2zqgr\" (UniqueName: \"kubernetes.io/projected/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-kube-api-access-2zqgr\") pod \"community-operators-26wkb\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.544052 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.975379 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 05:13:09 crc kubenswrapper[4898]: I0120 05:13:09.975637 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 05:13:10 crc kubenswrapper[4898]: I0120 05:13:10.108024 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-26wkb"] Jan 20 05:13:10 crc kubenswrapper[4898]: I0120 05:13:10.603699 4898 generic.go:334] "Generic (PLEG): container finished" podID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" containerID="1664e9294758fc7c6ff2d1fb0806b929ecc9c385259898d4a7cc661f5634e48a" exitCode=0 Jan 20 05:13:10 crc kubenswrapper[4898]: I0120 05:13:10.603799 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26wkb" event={"ID":"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26","Type":"ContainerDied","Data":"1664e9294758fc7c6ff2d1fb0806b929ecc9c385259898d4a7cc661f5634e48a"} Jan 20 05:13:10 crc kubenswrapper[4898]: I0120 05:13:10.604047 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26wkb" event={"ID":"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26","Type":"ContainerStarted","Data":"69cf0f057d0a905bc8fde6311ab4228d3ae8e1c75ad23c289ce57392b2470af7"} Jan 20 05:13:10 crc kubenswrapper[4898]: I0120 05:13:10.605754 4898 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 05:13:12 crc kubenswrapper[4898]: I0120 05:13:12.637808 4898 generic.go:334] "Generic (PLEG): container finished" podID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" containerID="13c46ce28e53944b0b78327c0a008516608d7a22c85495c29c1413d9aa248bc9" exitCode=0 Jan 20 05:13:12 crc kubenswrapper[4898]: I0120 05:13:12.637847 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26wkb" event={"ID":"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26","Type":"ContainerDied","Data":"13c46ce28e53944b0b78327c0a008516608d7a22c85495c29c1413d9aa248bc9"} Jan 20 05:13:14 crc kubenswrapper[4898]: I0120 05:13:14.659707 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26wkb" event={"ID":"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26","Type":"ContainerStarted","Data":"c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d"} Jan 20 05:13:14 crc kubenswrapper[4898]: I0120 05:13:14.691738 4898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-26wkb" podStartSLOduration=2.786029014 podStartE2EDuration="5.691711995s" podCreationTimestamp="2026-01-20 05:13:09 +0000 UTC" firstStartedPulling="2026-01-20 05:13:10.60556349 +0000 UTC m=+5037.205351349" lastFinishedPulling="2026-01-20 05:13:13.511246461 +0000 UTC m=+5040.111034330" observedRunningTime="2026-01-20 05:13:14.674928187 +0000 UTC m=+5041.274716036" watchObservedRunningTime="2026-01-20 05:13:14.691711995 +0000 UTC m=+5041.291499884" Jan 20 05:13:19 crc kubenswrapper[4898]: I0120 05:13:19.544545 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:19 crc kubenswrapper[4898]: I0120 05:13:19.545225 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:19 crc kubenswrapper[4898]: I0120 05:13:19.743209 4898 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:19 crc kubenswrapper[4898]: I0120 05:13:19.801024 4898 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:19 crc kubenswrapper[4898]: I0120 05:13:19.976572 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-26wkb"] Jan 20 05:13:21 crc kubenswrapper[4898]: I0120 05:13:21.732641 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-26wkb" podUID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" containerName="registry-server" containerID="cri-o://c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d" gracePeriod=2 Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.277312 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.470583 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-utilities\") pod \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.470777 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zqgr\" (UniqueName: \"kubernetes.io/projected/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-kube-api-access-2zqgr\") pod \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.470865 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-catalog-content\") pod \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\" (UID: \"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26\") " Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.472400 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-utilities" (OuterVolumeSpecName: "utilities") pod "efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" (UID: "efcc1499-2d99-4a89-9e0f-0d30fe0e1c26"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.476610 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-kube-api-access-2zqgr" (OuterVolumeSpecName: "kube-api-access-2zqgr") pod "efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" (UID: "efcc1499-2d99-4a89-9e0f-0d30fe0e1c26"). InnerVolumeSpecName "kube-api-access-2zqgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.531909 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" (UID: "efcc1499-2d99-4a89-9e0f-0d30fe0e1c26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.573008 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zqgr\" (UniqueName: \"kubernetes.io/projected/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-kube-api-access-2zqgr\") on node \"crc\" DevicePath \"\"" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.573039 4898 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.573113 4898 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.745578 4898 generic.go:334] "Generic (PLEG): container finished" podID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" containerID="c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d" exitCode=0 Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.745628 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26wkb" event={"ID":"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26","Type":"ContainerDied","Data":"c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d"} Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.745654 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26wkb" event={"ID":"efcc1499-2d99-4a89-9e0f-0d30fe0e1c26","Type":"ContainerDied","Data":"69cf0f057d0a905bc8fde6311ab4228d3ae8e1c75ad23c289ce57392b2470af7"} Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.745672 4898 scope.go:117] "RemoveContainer" containerID="c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.745785 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-26wkb" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.784366 4898 scope.go:117] "RemoveContainer" containerID="13c46ce28e53944b0b78327c0a008516608d7a22c85495c29c1413d9aa248bc9" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.787001 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-26wkb"] Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.793807 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-26wkb"] Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.810222 4898 scope.go:117] "RemoveContainer" containerID="1664e9294758fc7c6ff2d1fb0806b929ecc9c385259898d4a7cc661f5634e48a" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.870061 4898 scope.go:117] "RemoveContainer" containerID="c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d" Jan 20 05:13:22 crc kubenswrapper[4898]: E0120 05:13:22.870672 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d\": container with ID starting with c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d not found: ID does not exist" containerID="c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.870792 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d"} err="failed to get container status \"c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d\": rpc error: code = NotFound desc = could not find container \"c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d\": container with ID starting with c8fdff3e20b1b614e5448da2ad6fd71b6024116c7752cc98f07f5eebc335e30d not found: ID does not exist" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.870829 4898 scope.go:117] "RemoveContainer" containerID="13c46ce28e53944b0b78327c0a008516608d7a22c85495c29c1413d9aa248bc9" Jan 20 05:13:22 crc kubenswrapper[4898]: E0120 05:13:22.871273 4898 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c46ce28e53944b0b78327c0a008516608d7a22c85495c29c1413d9aa248bc9\": container with ID starting with 13c46ce28e53944b0b78327c0a008516608d7a22c85495c29c1413d9aa248bc9 not found: ID does not exist" containerID="13c46ce28e53944b0b78327c0a008516608d7a22c85495c29c1413d9aa248bc9" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.871306 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c46ce28e53944b0b78327c0a008516608d7a22c85495c29c1413d9aa248bc9"} err="failed to get container status \"13c46ce28e53944b0b78327c0a008516608d7a22c85495c29c1413d9aa248bc9\": rpc error: code = NotFound desc = could not find container \"13c46ce28e53944b0b78327c0a008516608d7a22c85495c29c1413d9aa248bc9\": container with ID starting with 13c46ce28e53944b0b78327c0a008516608d7a22c85495c29c1413d9aa248bc9 not found: ID does not exist" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.871331 4898 scope.go:117] "RemoveContainer" containerID="1664e9294758fc7c6ff2d1fb0806b929ecc9c385259898d4a7cc661f5634e48a" Jan 20 05:13:22 crc kubenswrapper[4898]: E0120 05:13:22.871617 4898 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1664e9294758fc7c6ff2d1fb0806b929ecc9c385259898d4a7cc661f5634e48a\": container with ID starting with 1664e9294758fc7c6ff2d1fb0806b929ecc9c385259898d4a7cc661f5634e48a not found: ID does not exist" containerID="1664e9294758fc7c6ff2d1fb0806b929ecc9c385259898d4a7cc661f5634e48a" Jan 20 05:13:22 crc kubenswrapper[4898]: I0120 05:13:22.871645 4898 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1664e9294758fc7c6ff2d1fb0806b929ecc9c385259898d4a7cc661f5634e48a"} err="failed to get container status \"1664e9294758fc7c6ff2d1fb0806b929ecc9c385259898d4a7cc661f5634e48a\": rpc error: code = NotFound desc = could not find container \"1664e9294758fc7c6ff2d1fb0806b929ecc9c385259898d4a7cc661f5634e48a\": container with ID starting with 1664e9294758fc7c6ff2d1fb0806b929ecc9c385259898d4a7cc661f5634e48a not found: ID does not exist" Jan 20 05:13:23 crc kubenswrapper[4898]: I0120 05:13:23.755441 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" path="/var/lib/kubelet/pods/efcc1499-2d99-4a89-9e0f-0d30fe0e1c26/volumes" Jan 20 05:13:39 crc kubenswrapper[4898]: I0120 05:13:39.976349 4898 patch_prober.go:28] interesting pod/machine-config-daemon-cwlf6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 05:13:39 crc kubenswrapper[4898]: I0120 05:13:39.976890 4898 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 05:13:39 crc kubenswrapper[4898]: I0120 05:13:39.976942 4898 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" Jan 20 05:13:39 crc kubenswrapper[4898]: I0120 05:13:39.977797 4898 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d539c2a651748ca7978ba9ba3cdb5c7a5dc4715a629b42ee2ce0b79ff74f2096"} pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 05:13:39 crc kubenswrapper[4898]: I0120 05:13:39.977868 4898 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" podUID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerName="machine-config-daemon" containerID="cri-o://d539c2a651748ca7978ba9ba3cdb5c7a5dc4715a629b42ee2ce0b79ff74f2096" gracePeriod=600 Jan 20 05:13:40 crc kubenswrapper[4898]: I0120 05:13:40.984942 4898 generic.go:334] "Generic (PLEG): container finished" podID="aef68392-4b9d-4a0c-a90e-8f04051fda21" containerID="d539c2a651748ca7978ba9ba3cdb5c7a5dc4715a629b42ee2ce0b79ff74f2096" exitCode=0 Jan 20 05:13:40 crc kubenswrapper[4898]: I0120 05:13:40.985010 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" 
event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerDied","Data":"d539c2a651748ca7978ba9ba3cdb5c7a5dc4715a629b42ee2ce0b79ff74f2096"} Jan 20 05:13:40 crc kubenswrapper[4898]: I0120 05:13:40.985481 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cwlf6" event={"ID":"aef68392-4b9d-4a0c-a90e-8f04051fda21","Type":"ContainerStarted","Data":"81d4c7712893e62c6d4899d4d12d8ac0c62eab8342812db2d681c5a1b412937b"} Jan 20 05:13:40 crc kubenswrapper[4898]: I0120 05:13:40.985547 4898 scope.go:117] "RemoveContainer" containerID="cbb0f2754f54affa0f861f1d18005854a22c9f1986e4278275e704b29aa8f22c" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.160353 4898 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96"] Jan 20 05:15:00 crc kubenswrapper[4898]: E0120 05:15:00.211695 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" containerName="extract-utilities" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.211728 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" containerName="extract-utilities" Jan 20 05:15:00 crc kubenswrapper[4898]: E0120 05:15:00.211769 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" containerName="extract-content" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.211777 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" containerName="extract-content" Jan 20 05:15:00 crc kubenswrapper[4898]: E0120 05:15:00.211792 4898 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" containerName="registry-server" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.211800 4898 state_mem.go:107] "Deleted CPUSet assignment" podUID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" containerName="registry-server" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.212041 4898 memory_manager.go:354] "RemoveStaleState removing state" podUID="efcc1499-2d99-4a89-9e0f-0d30fe0e1c26" containerName="registry-server" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.212715 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96"] Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.212808 4898 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.215352 4898 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.215878 4898 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.376200 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25ef49a-6a7d-498b-9b39-82749e027d0c-secret-volume\") pod \"collect-profiles-29481435-hgf96\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.376273 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25ef49a-6a7d-498b-9b39-82749e027d0c-config-volume\") pod \"collect-profiles-29481435-hgf96\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.376394 4898 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg4qs\" (UniqueName: \"kubernetes.io/projected/b25ef49a-6a7d-498b-9b39-82749e027d0c-kube-api-access-fg4qs\") pod \"collect-profiles-29481435-hgf96\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.478031 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25ef49a-6a7d-498b-9b39-82749e027d0c-secret-volume\") pod \"collect-profiles-29481435-hgf96\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.478454 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25ef49a-6a7d-498b-9b39-82749e027d0c-config-volume\") pod \"collect-profiles-29481435-hgf96\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.478576 4898 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg4qs\" (UniqueName: \"kubernetes.io/projected/b25ef49a-6a7d-498b-9b39-82749e027d0c-kube-api-access-fg4qs\") pod \"collect-profiles-29481435-hgf96\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.480314 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25ef49a-6a7d-498b-9b39-82749e027d0c-config-volume\") pod \"collect-profiles-29481435-hgf96\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:00 crc 
kubenswrapper[4898]: I0120 05:15:00.484184 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25ef49a-6a7d-498b-9b39-82749e027d0c-secret-volume\") pod \"collect-profiles-29481435-hgf96\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.496565 4898 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg4qs\" (UniqueName: \"kubernetes.io/projected/b25ef49a-6a7d-498b-9b39-82749e027d0c-kube-api-access-fg4qs\") pod \"collect-profiles-29481435-hgf96\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.543178 4898 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:00 crc kubenswrapper[4898]: I0120 05:15:00.993066 4898 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96"] Jan 20 05:15:01 crc kubenswrapper[4898]: I0120 05:15:01.814860 4898 generic.go:334] "Generic (PLEG): container finished" podID="b25ef49a-6a7d-498b-9b39-82749e027d0c" containerID="ca32fc106d959c79a418420934bd47f20c6b7343392a7b2ef6dd47f3634736dc" exitCode=0 Jan 20 05:15:01 crc kubenswrapper[4898]: I0120 05:15:01.815269 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" event={"ID":"b25ef49a-6a7d-498b-9b39-82749e027d0c","Type":"ContainerDied","Data":"ca32fc106d959c79a418420934bd47f20c6b7343392a7b2ef6dd47f3634736dc"} Jan 20 05:15:01 crc kubenswrapper[4898]: I0120 05:15:01.815318 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" event={"ID":"b25ef49a-6a7d-498b-9b39-82749e027d0c","Type":"ContainerStarted","Data":"a8de9dc0115fdcde6bbe110118e25d5ee9598cd526c51a47f5ac076f7e131f34"} Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.418874 4898 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.546742 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25ef49a-6a7d-498b-9b39-82749e027d0c-config-volume\") pod \"b25ef49a-6a7d-498b-9b39-82749e027d0c\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.546856 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25ef49a-6a7d-498b-9b39-82749e027d0c-secret-volume\") pod \"b25ef49a-6a7d-498b-9b39-82749e027d0c\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.546900 4898 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg4qs\" (UniqueName: \"kubernetes.io/projected/b25ef49a-6a7d-498b-9b39-82749e027d0c-kube-api-access-fg4qs\") pod \"b25ef49a-6a7d-498b-9b39-82749e027d0c\" (UID: \"b25ef49a-6a7d-498b-9b39-82749e027d0c\") " Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.548138 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b25ef49a-6a7d-498b-9b39-82749e027d0c-config-volume" (OuterVolumeSpecName: "config-volume") pod "b25ef49a-6a7d-498b-9b39-82749e027d0c" (UID: "b25ef49a-6a7d-498b-9b39-82749e027d0c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.553931 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25ef49a-6a7d-498b-9b39-82749e027d0c-kube-api-access-fg4qs" (OuterVolumeSpecName: "kube-api-access-fg4qs") pod "b25ef49a-6a7d-498b-9b39-82749e027d0c" (UID: "b25ef49a-6a7d-498b-9b39-82749e027d0c"). InnerVolumeSpecName "kube-api-access-fg4qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.558675 4898 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25ef49a-6a7d-498b-9b39-82749e027d0c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b25ef49a-6a7d-498b-9b39-82749e027d0c" (UID: "b25ef49a-6a7d-498b-9b39-82749e027d0c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.648986 4898 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25ef49a-6a7d-498b-9b39-82749e027d0c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.649031 4898 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25ef49a-6a7d-498b-9b39-82749e027d0c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.649053 4898 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg4qs\" (UniqueName: \"kubernetes.io/projected/b25ef49a-6a7d-498b-9b39-82749e027d0c-kube-api-access-fg4qs\") on node \"crc\" DevicePath \"\"" Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.834832 4898 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" event={"ID":"b25ef49a-6a7d-498b-9b39-82749e027d0c","Type":"ContainerDied","Data":"a8de9dc0115fdcde6bbe110118e25d5ee9598cd526c51a47f5ac076f7e131f34"} Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.835218 4898 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8de9dc0115fdcde6bbe110118e25d5ee9598cd526c51a47f5ac076f7e131f34" Jan 20 05:15:03 crc kubenswrapper[4898]: I0120 05:15:03.834878 4898 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481435-hgf96" Jan 20 05:15:04 crc kubenswrapper[4898]: I0120 05:15:04.504725 4898 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp"] Jan 20 05:15:04 crc kubenswrapper[4898]: I0120 05:15:04.526745 4898 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481390-274pp"] Jan 20 05:15:05 crc kubenswrapper[4898]: I0120 05:15:05.732778 4898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb7a785-3b6e-40a9-ad1c-babff3bf1cef" path="/var/lib/kubelet/pods/8eb7a785-3b6e-40a9-ad1c-babff3bf1cef/volumes" Jan 20 05:15:20 crc kubenswrapper[4898]: I0120 05:15:20.943095 4898 scope.go:117] "RemoveContainer" containerID="68529a05f185e9c4737264bfb960294513d581c9dd9bf1147a3d80ccfe662438"